Gadoxetic acid-based hepatobiliary MRI in hepatocellular carcinoma Background & Aims SORAMIC is a prospective phase II randomised controlled trial in hepatocellular carcinoma (HCC). It consists of 3 parts: a diagnostic study and 2 therapeutic studies with either curative ablation or palliative Yttrium-90 radioembolisation combined with sorafenib. We report the diagnostic cohort study aimed to determine the accuracy of gadoxetic acid-enhanced magnetic resonance imaging (MRI), including hepatobiliary phase (HBP) imaging features compared with contrast-enhanced computed tomography (CT). The primary objective was the accuracy of treatment decisions stratifying patients for curative or palliative (non-ablation) treatment. Methods Patients with clinically suspected HCC underwent gadoxetic acid-enhanced MRI (HBP MRI, including dynamic MRI) and contrast-enhanced CT. Blinded read of the image data was performed by 2 reader groups (radiologists, R1 and R2). A truth panel with access to all clinical data and follow-up imaging served as reference. Imaging criteria for curative ablation were defined as up to 4 lesions <5 cm and absence of macrovascular invasion. The primary endpoint was non-inferiority of HBP MRI vs. CT in a first step and superiority in a second step. Results The intent-to-treat population comprised 538 patients. Treatment decisions matched the truth panel assessment in 83.3% and 81.2% for HBP MRI (R1 and R2), and 73.4% and 70.8% for CT. Non-inferiority and superiority (second step) of HBP MRI vs. CT were demonstrated (odds ratio 1.14 [1.09–1.19]). HBP MRI identified patients with >4 lesions significantly more frequently than CT. Conclusions In HCC, HBP MRI provided a more accurate decision than CT for a curative vs. palliative treatment strategy. Lay summary Patients with hepatocellular carcinoma are allocated to curative or palliative treatment according to the stage of their disease. Hepatobiliary imaging using gadoxetic acid-enhanced MRI is more accurate than CT for treatment decision-making. Protocol Version and Amendments The statistical analysis plan for the first interim analysis is based on the final study protocol 3.0 of study SORAMIC dated 15/APR/2013. There are two country-specific protocols, version 3.2 (10/MAR/2014) which ran in the United Kingdom and version 3.3 (11/MAR/2014), which ran in France. In version 3.2, a different imaging schedule was permitted for patients in the local ablation arm. Version 3.3 incorporated additional ECG monitoring after the administration of sorafenib. Applicable Standards None. Review Report None. Other Documents None. Study Design (verbatim from last protocol version) This study will be conducted as a controlled, randomized clinical trial. The study is designed with the intent to follow typical diagnostic-therapeutic pathways in hepatocellular carcinoma (HCC). An overview on the trial design is given in Figure 1. Patients with a diagnosis of HCC will be eligible for this trial. In the screening phase, patients will receive Primovist®-enhanced magnetic resonance imaging (MRI) and contrast-enhanced computed tomography (CT) (if not already performed within 4 weeks prior to entry into this study) for the assessment of disease stage. This first study phase is called "screening phase" because this phase serves screening purposes for the therapeutic part of the SORAMIC trial. 
However, the diagnostic procedures performed during this phase will be part of the overall efficacy analysis of the SORAMIC trial: Primovist®enhanced MRI and contrast-enhanced CT will be compared with regard to treatment decisions in the diagnostic sub-study. The stage of disease will be determined by the local investigators at the end of the screening phase using all available clinical information including Primovist®-enhanced MRI and contrast-enhanced CT. Assessment of disease stage will be guided by Barcelona Clinic Liver Cancer (BCLC) criteria. Patients with BCLC stages A, B, and C can be included into this study provided that they have liver-dominant disease and do not present with pulmonary metastases. Based on disease stage, a treatment strategy with palliative intent or with local ablation will be chosen (patients not meeting the inclusion criteria for the local ablation or palliative study groups will leave the study once the diagnostic part is completed). In general, patients with BCLC stage A will be eligible for the local ablation study group; patients with BLCL stages B and C will be eligible for the palliative study group. In order to reflect the ablation potential of RFA, patients will be eligible for treatment with local ablation in this trial in the case that they present with up to 4 tumor lesions with a maximum diameter of 5 cm each (i.e., 2 RFA sessions with ablation of 2 lesions in each RFA session). The decision for a treatment strategy with local ablation or with palliative intent will reside with the responsible physicians at the trial sites. In the local ablation group, patients will be randomized after completion of RFA to receive • sorafenib i.e. RFA + Sorafenib, OR • placebo i.e. RFA + Placebo. In the palliative treatment group, patients will be randomized to receive • SIR-Spheres® therapy + sorafenib, OR • sorafenib alone. In the local ablation group, patients will receive continuous treatment with sorafenib or matching placebo until disease recurrence (maximum duration of the local ablation study group: 24 months after "last patient first visit"). Patients will be randomized to sorafenib or placebo on the basis of 1:1 after completion of RFA. Treatment will start at a reduced dose level at day 3 after completion of local treatment and will be increased to full dose (400 mg sorafenib bid) at day 10. The dose of sorafenib (or matching placebo) may be reduced in case of adverse events; details and rules regarding dose modification are outlined in section 5.5.1. Patients will be followed at 2-months intervals until recurrence; follow-up imaging (contrastenhanced CT and/or Primovist®enhanced MRI) will be conducted as clinically indicated but no less frequently than every 3 months. The diagnosis of recurrence is made by the local investigator and confirmed by at least one external radiologist with experience in contrastenhanced CT and Primovist®-enhanced MRI of the liver (this external radiologist can be an investigator in another participating center). The patient remains in the study until the local investigator and the external radiologist agree on the presence of recurrence. Patients who do not complete local therapy as planned will be excluded from the study (technical failures). The interventional radiologist will assess if RFA has been successful. One re-intervention of a maximum of 2 lesions is allowed within 2 weeks of the last "regular" ablation session (in this case, the patient may have undergone a total of 3 ablation sessions). 
In the palliative treatment group, about 52% of the patients will be randomized to receive SIR-Spheres® therapy (SIRT). SIRT will be performed after exclusion of relevant hepato-pulmonary shunts using 99m Tc-labeled macroaggregated albumin (MAA) and after exclusion of relevant risk of microsphere misplacement in extrahepatic organs as recommended by the manufacturer. Patients who are randomized to receive SIRT and in whom SIRT cannot be performed (technical reasons, pulmonary lung shunting), will remain in the study and will be switched to the sorafenib only arm. SIRT will be performed on a sequential lobar basis. If both liver lobes are involved, SIRT is administered in 2 treatment sessions 4-6 weeks apart; i.e., the first liver lobe will be treated in the first treatment session, the second lobe will be treated in the second treatment session 4-6 weeks later (the lobe with the higher tumor load is to be treated first; this will most frequently be the right liver lobe). Patients who are not able to receive at least 30% of the prescribed dose of SIR-Spheres® will be excluded from the study. Patients may be re-treated with SIRT once during follow-up, if clinically indicated (only patients who have been randomized to the SIRT arm). No cross-over from the sorafenib only arm to the SIRT + sorafenib arm is allowed. All patients of the palliative group will receive sorafenib treatment with a target dose of 400 mg bid. Sorafenib is the current standard of care in HCC patients treated with palliative intent and is, therefore, not regarded as "study medication" in the context of the palliative treatment group of this trial. Sorafenib treatment will start 3 days after completion of SIRT in all patients randomized to the SIRT arm, or within 7 days after randomization in the remaining patients. Treatment will start at a reduced dose level at day 3 after completion of local / locoregional treatment and will be increased to full dose (400 mg sorafenib bid) at day 10. The dose of sorafenib may be reduced in case of adverse events; details and rules regarding dose modification are outlined in section 5.5.1. Patients will be followed at 2 months intervals; diagnostic imaging is not required in the palliative group in the context of the SORAMIC trial but will be performed at the discretion of the local investigator. If diagnostic imaging is performed during follow-up, the results must be reported on the CRF and the images will be collected. Patients who discontinue sorafenib in either arm of the palliative group due to unmanageable toxicity (see section 5.5.1, Tables 2, 3, and 4) and/or progression of disease may receive additional medication (including experimental medications) after discontinuation of sorafenib (e.g., other types of chemotherapy) and will remain on study. The maximum duration of the palliative study group is 24 months after "last patient first visit". The DSMB closely monitored the first patients receiving SIRT + sorafenib. Based on a careful safety assessment, the DSMB had the authority to recommend changes in the timing of the start of sorafenib treatment (e.g., start of sorafenib treatment later than 3 days after the last SIRT session), or even recommend to stop recruitment to the palliative study group. The working mode of the DSMB is detailed in the DSMB charter; the stopping rule for the palliative group is outlined in section 8.9 of the study protocol. No such changes were recommended. 
In both groups (local ablation group and palliative treatment group), blood will be collected for the determination of serum / plasma levels of biologic response markers (BRMs, biomarkers). Serum / plasma levels of BRMs can serve as indicators of the host response to the treatment interventions with respect to the up-or down-regulation of various antiangiogenic and inflammatory mechanisms that play a critical role in controlling cancer propagation and metastasis. Measurement of serial changes in the expression levels of these modifiers will serve to determine whether these therapies cause an up-or down-regulation of BRMs. Each sample will be analysed for angiogenic factors and other BRMs as appropriate using protein macroarrays. In addition, tissue from liver tumors shall be stored for later analysis of molecular expression patterns, provided that liver biopsy has been performed for clinical reasons before the first session of microtherapy (liver biopsy is not part of the SORAMIC trial). Primary Objectives 1. In patients in whom local ablation therapy is appropriate (local ablation group), to determine if the sorafenib in combination with radiofrequency ablation (RFA) prolongs the time-to-recurrence (TTR) in comparison with RFA + placebo. 2. In patients in whom RFA is NOT appropriate (palliative treatment group), to determine if the combination of yttrium-90 microspheres (SIRT) + sorafenib improves the overall survival (OS) in comparison to sorafenib alone. 3. To confirm in a 2-step procedure that Primovist®-enhanced MRI is non-inferior (first step) or superior (second step) compared with contrast-enhanced multislice CT for assignment of patients to a palliative vs. local ablation treatment strategy. The overall study is successful if the primary objectives 1 OR 2 are met AND Primovist®-enhanced MRI is at least non-inferior to contrast-enhanced CT for treatment decisions. Secondary Objectives • to assess health-related quality of life • to compare the number of detected lesions and the diagnostic confidence in Primovist®-enhanced MRI with contrast-enhanced CT • to compare Primovist®-enhanced MRI with contrast-enhanced CT regarding the detection of recurrence (patients in the local ablation study group only) • to assess the safety of the combination of RFA + sorafenib in comparison to RFA+ placebo • to assess the safety of the combination of SIR-Spheres® and sorafenib therapy in comparison to sorafenib therapy alone • to assess in the palliative study group overall survival separately for patients with and without portal thrombosis Primary Efficacy Variable(s) The primary efficacy variable in the local ablation group is the time-to-recurrence (TTR defined as the time from randomization to recurrence, assessed from the blinded read; the time of recurrence is the earliest evidence of recurrence in contrast-enhanced CT or MRI). In the absence of a TTR event, TTR time will be censored at the date of the last disease assessment. Subjects alive who do not have post baseline disease assessment will have their TTR times censored one day after the date of randomization. The primary efficacy variable in the palliative treatment group is overall survival, which is defined as the time from randomization to death due to any cause in the palliative study arm. Patients with no event at the date of cut-off will be censored at the last documented visit at the study site. 
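The censoring rules above are mechanical enough that a small worked example may help. The sketch below derives TTR durations and event indicators from hypothetical per-patient dates, following the stated rules (censor at the last disease assessment if no recurrence, one day after randomisation if no post-baseline assessment). The column names and the pandas workflow are illustrative assumptions, not part of the protocol.

```python
import pandas as pd

# Hypothetical per-patient records; column names are illustrative only.
patients = pd.DataFrame({
    "randomisation":   pd.to_datetime(["2014-01-10", "2014-02-03", "2014-02-20"]),
    "recurrence":      pd.to_datetime(["2014-07-01", None,         None]),
    "last_assessment": pd.to_datetime(["2014-07-01", "2015-01-15", None]),
})

def ttr_row(row):
    """Time-to-recurrence in days plus event flag, following the SAP censoring rules."""
    if pd.notna(row["recurrence"]):                 # recurrence observed -> event
        return (row["recurrence"] - row["randomisation"]).days, 1
    if pd.notna(row["last_assessment"]):            # censored at last disease assessment
        return (row["last_assessment"] - row["randomisation"]).days, 0
    return 1, 0                                     # no post-baseline assessment: censor at day 1

patients[["ttr_days", "event"]] = patients.apply(ttr_row, axis=1, result_type="expand")
print(patients[["ttr_days", "event"]])
```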
The primary efficacy variable in the diagnostic sub-study is the number of correct assignments to the local ablation / palliative treatment strategy with respect to a truth panel assessment as standard of reference (SOR). Secondary Efficacy Variables The secondary efficacy variables in the local ablation group are • Time-to-recurrence (TTR defined as the time from randomization to recurrence, assessed from the clinical interpretation of contrast-enhanced CT and/or the Primovist®-enhanced MRI) • Time point of detection of recurrence (local ablation group; Primovist®-enhanced MRI vs. contrastenhanced CT; as assessed in the blinded read), modified criteria for Primovist®-enhanced MRI. These modified criteria will be specified in the Blinded Read Manual before start of the blinded read and will include all evidence available at that time. Based on current knowledge, these criteria will include: o Arterial enhancement plus portal-venous washout plus hypointensity in hepatobiliary phase (typical HCC) o Arterial enhancement plus portal-venous washout with iso-to hyperintensity in hepatobiliary phase (well-differentiated HCC) o Arterial enhancement without portal-venous wash-out plus hypointensity in hepatobiliary phase (strong indication for HCC) • Overall survival (OS) in patients treated with local ablation: sorafenib vs. placebo (defined as the time from the date of randomization to the date of death due to any cause) • Patient reported outcomes (PROs), defined as health-related quality of life using the self-administered FACT-G in local language (FACT-HEP if available in local language) • Local control rate (defined as the time from the date of randomization to the date of local progress for locally treated lesions). In RFA a lesion based analysis will be applied; local recurrence is diagnosed if the center of the new lesion is located within 5 mm of the thermal scar (RFA). The secondary efficacy variables in the palliative treatment group are • Patient reported outcomes (PROs), defined as health-related quality of life using the self administered FACT-G in local language (FACT-HEP if available in local language) • 30-day mortality Exploratory • Time to first AFP progression (50% or more increase compared to NADIR) • AFP response (50% or more decrease to baseline at any point in time) The secondary efficacy variables in the diagnostic sub-study are • Confidence in therapeutic decision; per lesion; 4-point scale: very confident -confident -not confident -not confident at all • Number of HCC lesions detected by diagnostic procedure • Size of smallest HCC lesion detected by diagnostic procedure Coding dictionaries AEs will be coded by MedDRA Version 15.1. No other coding is planned. Populations for Analysis The definition of the full analysis set was redefined compared to the study protocol to better and more accurately reflect the intent-to-treat principle. Safety Analysis Set The safety analysis set includes all patients who were treated. The patients will be analyzed as treated, independent of any randomization errors. The safety analysis set is used for safety analyses. Full Analysis Set The full analysis set (FAS) follows the intent-to-treat (ITT) principle and consists of all patients for whom CRF entries are available. Patients will be analysed as randomized. The FAS is analyzed for demographics. The FAS will also be used as primary analysis set for efficacy of the local ablation and palliative group and as secondary analysis set of the diagnostic arm. 
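The exploratory AFP rules defined above (progression as a 50% or greater increase over the nadir, response as a 50% or greater decrease compared to baseline) are simple enough to express directly. The sketch below applies them to one hypothetical patient's AFP series; the column names, the pandas layout, and the choice to define the nadir from visits before the current one are illustrative assumptions.

```python
import pandas as pd

# Hypothetical AFP measurements for one patient, in chronological order
afp = pd.DataFrame({
    "day":   [0, 60, 120, 180, 240],
    "value": [400.0, 220.0, 180.0, 300.0, 150.0],
})

baseline = afp["value"].iloc[0]
afp["nadir_so_far"] = afp["value"].cummin()        # running nadir up to each visit

# Progression: first visit with a >= 50% increase over the nadir seen at earlier visits
# (one plausible reading; the protocol does not spell out whether the current visit counts)
prog = afp[afp["value"] >= 1.5 * afp["nadir_so_far"].shift(1)]
time_to_progression = prog["day"].iloc[0] if not prog.empty else None

# Response: any visit with a >= 50% decrease compared to baseline
response = bool((afp["value"] <= 0.5 * baseline).any())

print(time_to_progression, response)
```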
Per Protocol Set The per protocol (PP) set is a subset of the FAS excluding patients with major protocol deviations. The PP set will also be used as secondary analysis set for efficacy of the local ablation and palliative groups and as primary analysis set of the diagnostic arm. Interim analyses The interim efficacy analyses for the palliative treatment group are to occur after 80 and 160 deaths have been reported, and the nominal critical points for these interim analyses (and p-value) is 3.710 (p<0.0001) and 2.511 (p=0.006). The final analysis of the palliative treatment group will be performed after 240 reported deaths with a nominal critical point of 1.993 (p=0.0231). Depending on the outcome of the interim analysis, the palliative group may be stopped due to efficacy or in case of non-significance, the study will proceed. Subgroup analyses All subgroup analyses are regarded exploratory. No adjustment for multiplicity is planned. Gender-specific analyses will be performed for the primary and secondary efficacy variables of the palliative and the local ablation study group. In the local ablation group, separate analyses will be performed ( Rules for incomplete data Missing data will not be replaced. Patients with no event will be censored at the last available visit documented in the CRF and entered into the database. Rules for efficacy Not applicable. Body System: Adverse events will be categorized by MedDRA preferred term and system organ class (SOC) (MedDRA version 15.1). Attribution of the AE to Study Drug: A frequency table will be presented showing the information on attribution of the AE to study drug using the categorization given on the CRFs: Moreover, this categorization will be used in data listings. Rules for laboratory data Not applicable. Rules for vital signs Not applicable. Rules for physical examination Not applicable. Other rules Not applicable. Demography, Medical History, Concomitant Medication, Study Medication Descriptive statistics (n, mean, standard deviation, median, minimum and maximum) will be calculated for quantitative variables; frequency counts by category will be given for qualitative variables. Confidence intervals will be given where appropriate. If not stated otherwise, these intervals will be two sided in each case and provide 95% confidence. Individual listings will be provided for each parameter examined in this clinical study. Concomitant medication and medical history will be listed only (if no codes are available). Local Ablation Group The objective is to evaluate in patients with hepatocellular carcinoma treated with local ablation if the combination of RFA + sorafenib prolongs the time-to-recurrence in comparison with RFA + placebo. The TTR is based on the assessments of the three blinded readers evaluating the follow-up images of MRI and CT. The median of the TTRs of the three readers will be used for the analysis of the TTR. TTR is defined as the time between assignment to the local ablation group and documented recurrence. TTR as the primary endpoint is evaluated by the Kaplan-Meier method (product-limit method) to compute nonparametric estimates of the survivor functions. Right censoring will be taken into account. The survival curves will be compared between both treatment groups with the stratified log-rank test at an error level of α = 0.05 (one-sided test). No stratifying factor is defined for this study arm. 
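As a rough illustration of the analysis just described (Kaplan-Meier estimates of the recurrence-free curves plus a one-sided log-rank comparison of the two arms; since no stratifying factor is defined for this arm, a plain log-rank test is used), a sketch using the lifelines package is given below. The input file and its columns are hypothetical; lifelines reports a two-sided log-rank p-value, so the one-sided test at α = 0.05 is approximated here by halving it after checking the direction of the effect.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("ablation_group.csv")   # hypothetical columns: ttr_days, event, arm
sor = df[df["arm"] == "RFA+sorafenib"]
plc = df[df["arm"] == "RFA+placebo"]

# Kaplan-Meier (product-limit) estimates per arm
km_sor = KaplanMeierFitter().fit(sor["ttr_days"], event_observed=sor["event"], label="RFA + sorafenib")
km_plc = KaplanMeierFitter().fit(plc["ttr_days"], event_observed=plc["event"], label="RFA + placebo")

# Two-sided log-rank test; halve the p-value for the one-sided hypothesis H1: TTR longer with sorafenib
res = logrank_test(sor["ttr_days"], plc["ttr_days"],
                   event_observed_A=sor["event"], event_observed_B=plc["event"])
longer_with_sorafenib = km_sor.median_survival_time_ > km_plc.median_survival_time_
p_one_sided = res.p_value / 2 if longer_with_sorafenib else 1 - res.p_value / 2
print("one-sided log-rank p =", p_one_sided)
```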
The following null hypothesis H0,curative: TTR(RFA + sorafenib) = TTR(RFA + placebo) will be tested with a one-sided alpha of 5% against H1,curative: TTR(RFA + sorafenib) > TTR(RFA + placebo). Superiority of RFA + sorafenib can be concluded when the one-sided stratified log-rank test is significant. Palliative Group The objective is to evaluate in patients with hepatocellular carcinoma treated with palliative intent if the combination of SIRT + sorafenib improves the overall survival (OS) in comparison to sorafenib alone. OS as the primary endpoint is evaluated by the Kaplan-Meier method (product-limit method) to compute nonparametric estimates of the survivor functions. Right censoring will be taken into account. The survival curves will be compared between both treatment groups with the log-rank test at the specified error levels α (one-sided tests). The stratifying factor "PVT" (yes vs. no) will be taken into account in the Cox proportional hazards model. Superiority of SIRT + sorafenib can be concluded when the one-sided log-rank test is significant. Diagnostic Sub-Study The number of patients with correct assignment into the categories "local ablation", "palliative", and "none of these" will be assessed based on a blinded reading with respect to the assessment of a truth panel as SOR. The primary analysis of the primary efficacy variable will be done with generalized estimating equations (GEEs) with an independent working correlation matrix, taking into account the correlations between readers and between modalities through robust variance estimates. As the primary analysis, the following hypotheses will be tested with a z test based on GEEs at a one-sided level of significance of 2.5%. H0,diag,1st step: accuracy(MRI) - accuracy(CT) ≤ -δ will be tested with a one-sided alpha of 2.5% against H1,diag,1st step: accuracy(MRI) - accuracy(CT) > -δ. The null hypothesis H0 can be rejected if the two-sided 95% confidence interval (CI) for the difference in accuracies is completely above -δ. A difference of -5 percentage points was regarded as clinically relevant. Therefore, the non-inferiority margin was set to -5 percentage points, which is equivalent to an odds ratio of 0.75. If the lower limit of the 95% confidence interval for the odds ratio of the accuracies of MRI (including all available images) and CT is above this limit, non-inferiority of the MRI will be concluded. In a second step, the superiority of MRI over CT will be tested once non-inferiority is shown. The following hypotheses will be tested with a one-sided level of significance of 2.5%. H0,diag,2nd step: accuracy(MRI) - accuracy(CT) ≤ 0 will be tested with a one-sided alpha of 2.5% against H1,diag,2nd step: accuracy(MRI) - accuracy(CT) > 0. The null hypothesis H0 can be rejected and superiority concluded when the two-sided 95% confidence interval (CI) for the odds ratio of the accuracies of MRI and CT is completely above 1. As an additional analysis, the analysis will be repeated using only the perfusion MRI vs. CT. Global hypotheses for the Primary Efficacy Variables The global study hypotheses are defined as follows: H0,global: H0,diag,1st step cannot be rejected, or neither H0,palliative nor H0,curative can be rejected. The study is successful when the diagnostic sub-study with regard to non-inferiority and either treatment group (local ablation or palliative) are successful.
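The GEE analysis of the diagnostic endpoint can be sketched as follows: a binomial GEE with an independent working correlation and robust (sandwich) variance, clustering the repeated correct/incorrect assignments (per reader and per modality) within patients, followed by comparison of the odds-ratio confidence interval against the 0.75 non-inferiority margin. The long-format data frame, file name, and column names are assumptions for illustration, and statsmodels is used here only because it exposes GEE, not because it was the trial software.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long format: one row per patient x reader x modality, correct = 1/0 vs. the truth panel
df = pd.read_csv("diagnostic_reads.csv")   # hypothetical columns: patient, reader, modality, correct
df["is_mri"] = (df["modality"] == "HBP_MRI").astype(int)

gee = smf.gee("correct ~ is_mri", groups="patient", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Independence())
fit = gee.fit()

or_mri_vs_ct = np.exp(fit.params["is_mri"])
ci_low, ci_high = np.exp(fit.conf_int().loc["is_mri"])
print(f"OR = {or_mri_vs_ct:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")

non_inferior = ci_low > 0.75     # margin equivalent to a -5 percentage-point difference
superior = non_inferior and ci_low > 1.0
```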
As the local ablation group and the palliative treatment group enroll different patients, and as the null hypotheses of the diagnostic sub-study and the treatment groups both have to be rejected, no adjustment of the level of significance is needed in this study to keep the overall alpha. Also, the overall power is not affected, as the power of the diagnostic study with the given sample size from the two treatment groups is close to 100% and that of the treatment groups is at least 80%. Local Ablation Group All secondary efficacy analyses for the local ablation group will be done descriptively, comparing the two treatment groups. The OS and the comparison of the TTR for patients treated with RFA will be done analogously to the method used in section 3.6.1.1. HR-QoL will be analysed by an appropriate analysis of variance. The local control rate (time from the date of randomization to the date of local progress for locally treated lesions) will be analysed descriptively on a lesion basis. Palliative Group All secondary efficacy analyses for the palliative group will be done descriptively, comparing the two treatment groups. HR-QoL will be analysed by an appropriate analysis of variance. Diagnostic Sub-Study The time point of correct detection of recurrence will be compared descriptively between MRI and CT based on the assessments of the blinded readers with regard to the treatment group. A first analysis will be based on established criteria; a second analysis will be based on established criteria for CT and modified criteria for MRI. The correctness will be assessed with the diagnosis of the center as standard of reference. These analyses will be done for the following units: the liver as a whole, the two liver lobes, the eight liver segments, and by lesion. The reported morphology criteria for the description of HCCs of the three readers will be listed. The agreement rate of MRI and CT in the detection of affected segments in the eight liver segments overall will be estimated and 95% confidence intervals will be provided. The correlation between the readers will be taken into account by applying appropriate statistical methods (GEEs). For all other secondary efficacy variables, descriptive statistics (n, mean, standard deviation, median, minimum, and maximum) will be calculated for each quantitative variable. Absolute and relative frequencies will be given for categorical data. Safety Analysis The safety analysis will be based on the safety analysis set. All safety analyses will be done by study group and treatment arm. In addition, data from the treatment arms will be pooled for assessment of the safety of sorafenib. Descriptive statistics (n, mean, standard deviation, median, minimum and maximum) will be calculated for each quantitative variable; frequency counts by category will be made for each qualitative variable. The treatment application data will be summarized, including the time of start of study treatment. Adverse Events The frequencies of adverse events will be reported by study group and treatment arm. Tabulations will be provided for body systems, severity, seriousness, intensity, main pattern, study drug or device action, causal relationship to study drug or device, causal relationship to study conduct, and outcome of the AE. Any withdrawals from the study due to adverse events will be reported. Analyses will also be done for AEs by CTCAE grade, serious AEs (SAEs), AEs of special interest, and AEs leading to drug discontinuation and study discontinuation.
For pre-treatment events, summary tables and listings will be provided. Laboratory Data and Vital Signs Further safety assessments will determine frequencies and percentages of significant changes in results of vital signs using criteria from CTCAE grading. Clinically significant results are defined as events of CTCAE grade ≥ 3 as follows (NCI CTCAE version 4.03): • Systolic blood pressure ≥ 160 mmHg • Diastolic blood pressure ≥ 100 mmHg For laboratory data, tables of transition in/out of the normal range and percent change from baseline to follow-up visits (in increments of 10) will be presented. Cross-tables will be added for baseline vs. follow-up timepoints for the laboratory parameters with regard to clinically significant findings. Other Analyses Quality-of-life questionnaires FACT-G and FACT-HEP will be assessed by changes from baseline as an average over the follow-up timepoints using a mixed model for repeated measures (MMRM). Missing data will not be replaced.
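The SAP names a mixed model for repeated measures for the quality-of-life analysis. One way to approximate such a model with off-the-shelf tools is a linear mixed model with a patient-level random intercept, as sketched below; an MMRM with an unstructured covariance would typically be fitted with dedicated software, so this is only a rough stand-in. The data layout, file name, and column names are assumed for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long format: one row per patient and follow-up visit; change = FACT-G change from baseline
qol = pd.read_csv("fact_g_long.csv")   # hypothetical columns: patient, arm, visit, change

# Random-intercept approximation of the MMRM; missing visits are simply absent rows
# (i.e. not imputed, consistent with "missing data will not be replaced").
model = smf.mixedlm("change ~ C(arm) + C(visit) + C(arm):C(visit)",
                    data=qol, groups="patient")
fit = model.fit(reml=True)
print(fit.summary())
```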
Discovering disease-disease associations by fusing systems-level molecular data The advent of genome-scale genetic and genomic studies allows new insight into disease classification. Recently, a shift was made from linking diseases simply based on their shared genes towards systems-level integration of molecular data. Here, we aim to find relationships between diseases based on evidence from fusing all available molecular interaction and ontology data. We propose a multi-level hierarchy of disease classes that significantly overlaps with existing disease classification. In it, we find 14 disease-disease associations currently not present in Disease Ontology and provide evidence for their relationships through comorbidity data and literature curation. Interestingly, even though the number of known human genetic interactions is currently very small, we find they are the most important predictor of a link between diseases. Finally, we show that omission of any one of the included data sources reduces prediction quality, further highlighting the importance in the paradigm shift towards systems-level data fusion. D isease Ontology (DO) 1 is a well established classification and ontology of human diseases. It integrates disease nomenclature through inclusion and cross mapping of disease-specific terms and identifiers from Medical Subject Headings (MeSH) 2 , World Health Organization (WHO) International Classification of Diseases (ICD) 3 , Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) 4 , National Cancer Institute (NCI) thesaurus 5 and Online Mendelian Inheritance in Man (OMIM) 6 . It relates and classifies human diseases based on pathological analysis and clinical symptoms. However, the growing number of heterogeneous genomic, proteomic, transcriptomic and metabolic data currently does not contribute to this classification. Understanding of even the most straightforward monogenic classic Mendelian disorders is limited without considering interactions between mutations and biochemical and physiological characteristics. Hence, redefining human disease classification to include evidence from heterogeneous data is expected to improve prognosis and response to therapy 7 . In this paper we examine whether inclusion of modern molecular level data can improve disease classification. Several studies have reported on efforts and benefits of relating human diseases through their molecular causes. Loscalzo et al. 7 catalogued diseases through a network-based analysis of associations among genes, proteins, metabolites, intermediate phenotype and environmental factors that influence pathophenotype. Gulbahce et al. 8 constructed a ''viral disease network'' of disease associations to decipher the interplay between viruses and disease phenotypes. They uncovered several diseases that have not previously been associated with infection by the corresponding viruses. A similar approach was used by Lee et al. 9 to gain insights into disease relationships through a network derived from metabolic data instead of virological implications. They demonstrated that known metabolic coupling between enzyme-associated diseases reveal comorbidity patterns between diseases in patients. Goh et al. 10 studied the position of disease genes within the human interactome in order to predict new cancer-related genes. Conversely, a gene-centric approach to disease association discovery was used by Linghu et al. 
11 : they took 110 diseases for which a set of disease genes are known, and compared gene sets and their positions within the gene network to infer associations of related diseases. More details can be found in two recent surveys of current network analysis methods aimed at giving insights into human disease 12,13 , as well as in a review of different data sources that can provide complementary disease-relevant information 14 . A challenge in relating diseases and molecular data is in the multitude of information sources. Disease profiling may include data from genetics, genomics, transcriptomics, metabolomics or any other omics, all potentially related to susceptibility, progress and manifestation of disease. Such data may be related on their own: for example, information on transcription factor binding sites, gene and protein interactions, drug-target associations, various ontologies and other less-structured knowledge bases, such as literature repositories, are all inter-dependent and it is not trivial to integrate them in a way that will yield new information about diseases. This stresses the need for an integrated approach of current models to exploit all these heterogeneous data simultaneously when inferring new associations between diseases 13 . Data from heterogeneous sources of information can be integrated by data fusion 15 . Common fusion approaches follow early or late integration strategies, combining inputs 16 or predictions 17 , respectively. Another and often preferred approach is an intermediate integration, which preserves the structure of the data while inferring a single model [18][19][20] . An excellent example of intermediate integration is multiple kernel learning that convexly combines several kernel matrices constructed from available data sources 15,21 . Data fusion has been successfully applied for tasks such as gene prioritisation 15,21,22 , or gene network reconstruction and function prediction 16,23 . To our knowledge, we present the first application of data fusion to disease association mining. We choose the intermediate data fusion approach for its accuracy of inferring prediction models (i.e. how well a model can learn to predict disease-disease associations) and the ability to explicitly measure the contribution of each data set to the extracted knowledge 18,19 . Kernel-based fusion can only use data sources expressed in the "disease space", i.e. all data sources have to be expressed as kernel matrices encoding relationships between diseases, which may incur loss of information when transforming circumstantial data sources into appropriate feature space. In our study, most of the data sources are only indirectly related to diseases, hence we employ an alternative and recently proposed intermediate data fusion algorithm by matrix factorisation 24 , which has an accuracy comparable to kernel-based fusion approaches, but can treat all data sources directly (i.e. no transformation of data into "disease space" is necessary). The key idea of our data fusion approach lies in sharing of low-rank matrix factors between data sources that describe biological data of the same type. For instance, genes are one data type which can be linked to other data types such as Gene Ontology (GO) terms or diseases through two distinct data sources, namely GO annotations and disease-gene mapping. The fused factorised system contains matrix factors that are specific to every molecular data type, as well as matrix factors that are specific to every data source. Thus, low-rank matrix factors can simultaneously capture both source- and object type-specific patterns. We report on the ability of our recently developed data fusion approach to mine human disease-disease associations. Starting from Disease Ontology, we revise the links between diseases using related systems-level data, including protein-protein and genetic interactions, gene co-expressions, metabolic data, drug-target relations, and others (see Methods). By fusing these data we identify several disease-disease associations that were not present in Disease Ontology and validate their existence by finding strong support in the literature and significant comorbidity effects in associated diseases. We also quantify the contribution of each molecular data source to the integrated disease-disease association model. (Figure 1 caption: The shown block-based matrix representation exactly corresponds to the data fusion schema in Figure 3-A. We combine 11 data sources on four different types of objects (see Methods): drugs, genes, Disease Ontology (DO) terms and Gene Ontology (GO) terms. These data are encoded in two types of matrices: constraint matrices, which relate objects of the same type (such as drugs if they have common adverse effects) and are placed on the main diagonal (illustrated by matrices with blue entries); and relation matrices, which relate objects of different types and are placed off the main diagonal (illustrated by matrices with grey entries). Our data fusion approach involves three main steps. First, we construct a block-based matrix representation of all data sources used in our study (panel A, left). The molecular data encoded in these matrices are sparse, incomplete and noisy (depicted by different shades of blue and grey), and some matrices are completely missing because associated data sources are not available (e.g. no link between GO terms and drugs). In the second step, we simultaneously decompose all relation matrices as products of low-rank matrix factors and use constraint matrices to regularise low-rank approximations of relation matrices. The key idea of our data fusion approach is sharing low-rank matrix factors between relation matrices that describe objects of common type. The resulting factorised system (panel A, middle) contains matrix factors that are specific to every type of objects (four matrices in left part; e.g. G Drug), and matrix factors that are specific to every data source (six matrix factors in right part; e.g. S Gene, DO Term). Thus, low-rank matrix factors capture source- and object type-specific patterns. Finally, we use matrix factors to reconstruct relation matrices and complete their unobserved entries (panel A, right). Panel B shows the algorithm for assigning diseases to classes and obtaining disease-disease association predictions.) Results We fuse systems-level molecular data by using our recently developed matrix-factorisation approach (described in Methods) to gain new insight into the current state-of-the-art human disease classification. This large-scale data integration results in 108 highly reliable disease classes (each corresponding to a clique in the consensus matrix, C; see Methods section and Algorithm in Figure 1-B).
Size distribution of the 108 disease classes is as follows: 60 disease classes contain 2 diseases; 31 disease classes contain 3 or 4 diseases; 9 disease classes contain 5, 6 or 7 diseases; 5 disease classes contain 8, 9 or 10 diseases; 2 disease classes contain 11 or 17 diseases; and 1 disease class contains 146 diseases. For each class we examine the associations between its member diseases to inspect how the obtained classes align with currently accepted disease classification. Using Disease Ontology (DO) and literature curation, we find that the 107 smaller classes successfully capture closely-related diseases that are also placed near each other in DO (see below for details). Also, we find that in the largest identified disease class (i.e. the one containing 146 diseases), the most represented major disease is cancer (31.5%), followed by nervous system diseases (14.4%), inherited metabolic disorders (9.6%) and immune system diseases (5.5%). This class primarily contains diseases of anatomical entity (45.2%), cellular proliferation (25.4%) and metabolic diseases (14.3%), with other major concepts of DO being rarely represented. The large size of this class may reflect the following underlying biases in various data sources -its constituents represent either larger majority groups in DO, or minority groups at a lower level of ontology: . diseases of anatomical entity, because diseases are often described based on tissue/organ; . cellular proliferation, because of the heavy enrichment of cancers and the sub-classification of these into many variant diseases, also possibly driven by rich gene/pathway annotation around cell cycle and proliferation; . metabolic diseases, because of significant representation of metabolic diseases and significant understanding of metabolic pathways. Metabolic disease is a primary focus for systems modelling and simulation, as much is known from pathways and a wealth of omics data available. Since the obtained distribution appears unbalanced due to one large class containing 146 diseases, we further decompose that class by repeating data fusion analysis on its disease members. This effectively gives us a multi-layer hierarchical breakdown of disease classes (see Figure 2). The large class is broken down into 10 classes (only those observed in all 15 inferred models are taken into account; see Methods section). The distribution of disease class sizes is: 9 disease classes with 2 or 3 diseases, and 1 disease class with 51 diseases. The diseases captured by the 9 smaller classes are: two classes consist of cancer diseases, three consist of inherited metabolic disorders, one contains nervous system diseases, two contain respiratory system diseases, and the last one has cardiovascular system diseases. The largest disease class (containing 51 disease members) is further decomposed into 8 disease classes. The distribution of disease class sizes at this level of hierarchy is: 7 disease classes with 2 or 3 diseases, and 1 disease class with 18 diseases. The diseases captured by the 7 smaller classes are: two classes with immune system diseases, one class with cognitive disorders, one class with acquired metabolic diseases, one with cancer, and the last three were split between cognitive disorders and metabolic diseases. 
The largest class (containing 18 disease members; again, under the most stringent agreement threshold; see Methods) is finally decomposed into six conserved diseases (the remaining 12 diseases grouped less reliably under our stringent threshold): lung metastasis, dysgerminoma, serous cystadenoma (cellular proliferation and cancer), abetalipoproteinemia (metabolic disorder), related factor XIII deficiency and plasmodium falciparum malaria. Diseases in captured classes exhibit significant comorbidity. A comorbidity relationship exists between diseases whenever they affect the same individual substantially more than expected by chance. We want to know whether diseases assigned to the same disease class by our data fusion method exhibit higher comorbidity than diseases assigned to different classes. Hidalgo et al. 25 proposed two comorbidity measures (http://barabasilab.neu.edu/projects/ hudine) to quantify the distance between two diseases: a relative risk (defined below) and Pearson's correlation between prevalences of two diseases (w). A relative risk (RR) of two diseases is defined as the fraction between the number of patients diagnosed with both diseases and random expectation based on disease prevalence. Expressing the strength of comorbidity is difficult because different statistical distance measures are biased to under-or over-estimating the relationships between rare and prevalent diseases. The RR overestimates associations between rare diseases and underestimates associations involving highly prevalent diseases, whereas w has low values for diseases with extremely different prevalence, but is good at recognising comorbidities between disease pairs of similar prevalence. We find that 66 (out of 107) disease classes have a significantly higher comorbidity than what would be expected by chance (p-value , 0.001 with Bonferroni multiple comparison correction applied to all p-values). We assess the statistical significance by randomly sampling disease sets of the same size as the disease class in question, and computing the comorbidity enrichment scores of the sampled sets according to the two comorbidity measures, RR and w, as proposed by Hidalgo et al. 5 . The enrichment score is then computed as the mean of comorbidity values between all disease pairs in a disease class. For subsequent layers of hierarchical decomposition of the largest disease class (i.e. the one containing 146 diseases), we find that: 7 out of 10 first level disease classes have a significantly higher comorbidity (measured by both RR and w) than what would be expected by chance; comorbidity data was available for only 3 out of 8 second-level disease classes, and 2 of them exhibited significantly higher comorbidity than what would be expected by chance. Evaluating disease classes through Disease Ontology. To see how well our fusion approach captures disease-disease associations already present in the semantic structure of DO, we look at the overlap between 107 disease classes (again, we perform enrichment analysis of the largest above-described class separately, see below) and find that 79 classes have at least 80% of disease members directly connected in DO via is_a relationship; an example of one such disease class is given in Figure 3-B. We assess the statistical significance of such a high number of classes being enriched in known relations from DO by computing the p-value as follows. First, we remove all DO-related information (i.e. 
we remove the constraint matrix H 2 ; see Methods) and then we perform the data fusion again without any prior information on relationships between diseases. We find that such a high number of classes is unlikely to be enriched in known relations from DO by chance (p-value < 0.001). This result is very interesting as it indicates that DO could, in principle, be reconstructed from molecular data only. Our findings suggest that disease classification derived from pathological analysis and clinical symptoms (DO) can be largely reproduced by considering only molecular data. In other words, data fusion of different types of evidence could be used to infer a hierarchy of disease relations whose coverage and power might be very similar to those of the manually curated DO. The decomposition of the largest disease class yields similar results: 5 out of 9 first-level classes have their members directly linked in DO via is_a relationships; 4 out of 7 second-level disease classes have their members directly linked in DO via is_a relationships; the third-level class of size six does not significantly overlap with the DO graph, but is partially supported by literature 26 . Finding new links between diseases. In addition to examining classes of multiple diseases, we can use our fused model to rank individual disease-disease associations based on supporting molecular evidence, and make novel predictions linking previously seemingly unrelated diseases. Among all the highest-ranked disease-disease associations in the fused model (i.e. disease pairs from the most stable classes, obtained in step 3 of the Algorithm in Figure 1-B, with fewer than 6 disease members), we find 14 associations not recorded in Disease Ontology. We perform literature curation and find evidence for all 14 of the predicted disease associations (Table 2). Such high accuracy is due to our choice to take a highly stringent approach that requires the association to be observed in all 15 of the inferred models (see Methods for details). Comorbidity data were available for 4 out of 14 predicted disease associations, and all 4 of these disease-disease associations were found to have significantly high comorbidity. (Figure 2 caption: We further decompose the largest class by re-running the data fusion process on the set of diseases in the largest class in order to identify its fine-grained structure (level one). We repeat data fusion analysis using this top-down strategy two more times (levels two and three), which results in a hierarchical decomposition of the most reliable disease classes; see Methods.) Contribution of each data source to the fused model. We have seen that data fusion can successfully retrieve existing and uncover new associations between diseases. Now we examine the contribution of each individual data source to the final disease-disease association model. We estimate the relative importance of each of the fused data sources in predicting disease associations by comparing the quality of the inferred model that includes the data source, to the quality of the model that excludes it. The measured quality is represented by a tuple of residual sum of squares (RSS; lower values are better) and explained variance (Evar; higher values are better; see 24 for details) of the gene-disease relationship matrix R 12 (see Methods). So an increase in RSS and a decrease in Evar hinder the quality of the inferred model, and conversely, a decrease in RSS and an increase in Evar improve the quality of the inferred model. We find that omission of each of the five data sources that specify interactions between genes (H (t) ; see Methods) reduces the quality of the inferred model; the largest reduction in model quality (13.3%) comes from excluding the genetic interaction data. This result is unexpected, because the number of available genetic interactions is small (511). This may confirm the proposed importance of genetic interactions and functional buffering as being critical for understanding disease evolution and for design of new therapeutic approaches 27 . Although the dataset of genetic interactions is currently small, the observed interactions are more likely to be causative as opposed to correlative and may therefore have less associated noise, hence they appear to be more informative and have a larger importance on relationships between diseases than other data sources. Exclusion of other sources results in a smaller decrease in quality (Table 3), but nevertheless, these results confirm that all of the fused data sources contribute to the quality of the model. Discussion We integrate a wide range of modern systems-level molecular interaction and ontology data using our recently proposed data-fusion approach, and apply it to finding relationships between diseases previously unrecorded in DO. We validate our findings through comorbidity data and literature curation to demonstrate that such a systems-level integration can recover known and successfully identify currently unrecorded relationships between diseases. When searching for disease-disease associations not present in DO, we considered only those associations that are present in all of the inferred models. This conservative approach gave us 14 disease-disease association predictions which we validated through literature and comorbidity data. Relaxing the threshold for an association to be predicted, i.e. requiring a disease-disease association to be present in 95%, 90%, 85% or fewer of the inferred models, yields a higher number of predicted disease associations. For instance, we found 89 associations unrecorded by DO when requiring them to be present in at least 80% of the models. Exploring the effects of lowering this threshold remains a subject of future research, as we were able to demonstrate our goal to find potentially useful associations using the most stringent threshold. Specifically, two of the fourteen predicted disease-disease associations (between gastric lymphoma and crescentic glomerulonephritis, and between Cushing's syndrome and Hodgkin's lymphoma) demonstrate the ability of the approach to find interesting novel links, but also highlight the fact that it is not possible to determine causal from correlative relationships (which, indeed, in many cases may not be known), given our current scientific understanding. Perhaps even more interesting is the fact that the newly identified relations between diseases could, in principle, be used to systematically update and extend DO, or even develop a parallel data-driven hierarchy of disease relations. Utilising data fusion for disease re-classification, as well as linking these results with genome-wide association studies (GWAS), is a subject open to future research. We show that all available molecular data, regardless of their sparseness, are important for effective integration. Surprisingly, we find that genetic interaction data are the most predictive underlying factor of disease-disease associations despite their current small size.
The flexibility of our data fusion approach allows us to extend the model with new data sources or omit some sources of information to study their effects on predictive performance. We only require that the underlying graph of data fusion scheme (Figure 3-A) be connected. This gives our data fusion algorithm the power to share latent representations of object types between different data sources. For instance, we cannot omit data on drug targets (R 14 in Figure 3-A) without also removing data on adverse side-effects of drug combinations (H 4 ). Thus, we report in Results on the quality of all models that exclude any reasonable first-order combination of data sources and use these data to estimate contributions of data sources to the quality of the fused model. Since our data fusion approach is a semi-supervised learning method, it is less prone to over-fitting than supervised methods, i.e. ones that make distinctions between objects on the basis of predefined class label information. Additionally, in order to avoid over-fitting, we selected data fusion parameters through internal cross-validation and used constraint matrices -which express the notion that a pair of similar objects of the same type, such as a pair of drugs or a pair of diseases, should be close in their latent component space -to impose penalties on matrix factors. Thus, the observed reduction in model quality when any one of the included data sets is omitted is caused by the exclusion of complementary information provided by the data set rather than by the lack of robustness of the model. We have seen the role of data fusion in successful retrieval of existing and uncovering of novel links between diseases. Future improvements of such a comprehensive integration of molecular data would allow better understanding of underlying mechanisms Methods Data sources. In this study, we integrate biological data on objects of four different types (nodes in Figure 3-A): genes, diseases (Disease Ontology terms), drugs, and Gene Ontology (GO) terms. We observe them through 11 sources of information (edges in Figure 3-A). Every source of information is represented by a distinct data matrix that either relates objects of two different types (such as drugs and their associated target proteins) or objects of the same type (such as genetic interactions between genes): relations between objects of types i and j are represented by a relation matrix, R ij , and relations between objects of the same type i are represented by a constraint matrix, H i . Table 1 summarises all 11 data sets. Disease data. The principal source of information on human disease associations is Disease Ontology (DO) 1 . DO semantically combines medical and disease vocabularies and addresses the complexity of disease nomenclature through extensive cross-mapping of DO terms to standard clinical and medical terminologies of MeSH, ICD, NCI's thesaurus, SNOMED and OMIM. It is designed to reflect the current knowledge of human diseases and their associations with phenotype, environment and genetics. We extract 1,536 DO terms from the latest version of the disease ontology hosted by the OBO Foundry (http://www.obofoundry.org) and construct a binary matrix R 12 from 22,084 associations between genes and diseases. DO leverages the semantic richness through linking terms by computable relationships in the hierarchy (e.g. 
mediastinum ganglioneuroblastoma is_a peripheral nervous system ganglioneuroblastoma, which is_a ganglioneuroblastoma and then in turn is_a neuroblastoma) first by etiology and then by the affected body system. We use the semantic structure of DO to reason over is_a relations. Since entries in the constraint matrices are positive for objects that are not similar and negative for objects that are similar, the constraint between two DO terms in H 2 is set to -0.8^hops, where hops is the length of the path between the corresponding terms in the DO graph. We empirically chose 0.8 from the [0, 1] range (0 meaning that no two terms in the DO graph are related, and 1 meaning that two DO terms are always related, regardless of the path distance between them in the DO graph) by performing standardised internal cross-validation using values between 0 and 1 with a 0.1 step (i.e. 0, 0.1, 0.2, …, 1). Scores of multiple parentage (multiple is_a relationships) are summed to produce the final value of semantic association. Throughout the paper, we use disease and DO term interchangeably; both refer to a unique DO identifier (DOID). Gene Ontology data. We use relations between 11,853 distinct genes and 100,685 gene annotations that are given by Gene Ontology (GO) 28 to construct a binary matrix of direct annotations R 13 . Topology of the GO graph is included by reasoning over is_a, part_of and has_part relations between GO terms to populate H 3 in the same way as H 2 , with the constraint between two GO terms set to -0.9^hops. Drug data. We obtain drug data from DrugCard entries in the DrugBank (http://www.drugbank.ca) database that contains chemical, pharmacological and pharmaceutical drug information with comprehensive drug target details. Our model contains 4,477 distinct drugs, each identified by a DrugBank accession number. Drugs are related to their target proteins in R 14 , which is populated by 7,977 binary drug-target relationships from DrugBank. We use reported side-effects of drug combinations from DrugBank as 21,821 binary indicators of interactions between drugs in H 4 . Gene interaction data. We obtain the relationships between genes from five sources of interaction data (top five rows in Table 1). Genes are identified by their NCBI gene IDs. We first map the approved gene symbols and Uniprot IDs to Entrez gene IDs using the index files from the HGNC database 29 , downloaded in November 2012. This is done to convert all gene annotations, drug-target, and co-expression data into NCBI IDs. To increase coverage of gene and protein interaction data, we include all genes (or equivalently, proteins) for which at least two supporting pieces of information were available in any of the data sources listed in Table 1. In total, these sources include: 55,787 protein-protein interactions (PPIs) between 10,360 proteins (H …). Data fusion by matrix factorisation. We infer human disease-disease associations by integrating a multitude of relevant molecular data sources. We use a data mining approach based on matrix representation of these molecular data, which works by simultaneous matrix tri-factorisation 24 with sharing of matrix factors. The fusion consists of three main steps (illustrated in Figure 1-A). First, we construct relation and constraint matrices from all available data (Figure 3-A). Recall that a relation matrix encodes relations between objects of two different types (e.g. gene to Gene Ontology term annotation) and a constraint matrix describes relations between objects of the same type (e.g.
protein-protein interactions). Then, we simultaneously factorise the relation matrices under the given constraints, and finally we score statistically significant associations in the matrix decomposition and identify disease classes (details below and in Žitnik & Zupan (2013) [24]). Approximate matrix factorisation estimates a data matrix R_ij ∈ R^(n_i × n_j) as a product of low-rank matrix factors, R_ij ≈ G_i S_ij G_j^T, found by solving an optimisation problem. Here, the matrix factors are G_i ∈ R^(n_i × k_i), S_ij ∈ R^(k_i × k_j) and G_j ∈ R^(n_j × k_j). Factorisation ranks k_i and k_j are chosen to be much smaller than both n_i and n_j (k_i ≪ n_i and k_j ≪ n_j), which results in a compressed version of the original matrix R_ij. Profiles (row vectors in R_ij) of many objects of type i are represented by relatively few vectors from S_ij and low-dimensional vectors in G_i and G_j. Therefore, a good approximation can only be estimated if these vectors span a space that reveals some latent structure present in the original data. The key idea of our data fusion approach is matrix factor sharing when we simultaneously decompose all relation matrices. Matrix factor G_i is shared across decompositions of relation matrices that relate objects of type i to objects of some other type, whereas S_ij is used only in decomposing R_ij. Factor S_ij in our factorised system is thus specific to a relation matrix R_ij and factor G_i is specific to object type i. They capture source- and object type-specific patterns, respectively. The objective function minimised by the fusion algorithm enforces a good approximation of the input matrices and is regularised using the available constraint matrices presented in H^(t):

min over G ≥ 0 of  Σ_{R_ij ∈ R} ‖R_ij − G_i S_ij G_j^T‖² + Σ_t tr(G^T H^(t) G),

where ‖·‖ and tr(·) denote the Frobenius norm and trace, respectively (they are commonly used in matrix approximation tasks). Input to our data fusion algorithm consists of five constraint block matrices H^(t), 1 ≤ t ≤ 5, due to five sources of interaction data that represent relations between genes, and a relation block matrix R. The second, third and fourth block along the main diagonal of H^(t) is zero for t > 1 because we have a single constraint matrix per disease, drug, and GO term object types. To avoid data redundancy we encode only explicit relations between objects. Such a representation leads to zero off-diagonal blocks in R in place of the relation matrices R_23, R_24, R_32, R_34, R_42 and R_43, and to symmetry of relation matrices (R_ji = R_ij^T, S_ji = S_ij^T). The notion of transitivity between relations is inherently considered by the fusion algorithm. The data fusion algorithm outputs the block matrix factors G and S, which we use to identify disease classes. Notice that each block of matrix R is simultaneously approximated as G_i S_ij G_j^T, where G_i (G_j) is shared among all matrices that relate objects of the i-th (j-th) type to any other object type. That is different from treating R as a single homogeneous data matrix, which performs poorly [24]. Parameters of the fusion algorithm are the factorisation ranks, k_i, which determine the degree of dimension reduction for the four object types in our fusion schema. These factorisation ranks are selected from a predefined set of possible values to optimise the quality of the model in its ability to reconstruct the input data from the gene-disease relation matrix R_12. For example, gene-disease profiles of length approximately 1,500 in the original space are reduced to profiles with approximately 70 factors in data fusion space.
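To make the factorised system above concrete, the following minimal Python sketch builds a toy gene-disease relation matrix, a DO constraint matrix with the -0.8^hops entries described earlier, and evaluates the regularised objective for a single relation matrix together with an explained-variance measure of the kind reported later in Table 3. All dimensions, random data and function names are hypothetical illustrations, not the authors' implementation (which additionally relies on the multiplicative update rules of Žitnik & Zupan (2013)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: n1 genes, n2 diseases; factorisation ranks k1, k2 with k << n.
n1, n2, k1, k2 = 40, 25, 5, 4

# Hypothetical binary gene-disease relation matrix R12 (1 = known association).
R12 = (rng.random((n1, n2)) < 0.1).astype(float)

# Constraint matrix H2 between DO terms: -0.8**hops for related term pairs
# (negative entries encode similarity), 0 otherwise; kept symmetric.
hops = rng.integers(1, 6, size=(n2, n2))
related = rng.random((n2, n2)) < 0.2
H2 = np.where(related, -(0.8 ** hops), 0.0)
H2 = (H2 + H2.T) / 2.0

# Random non-negative factors G1, S12, G2 (in practice these are optimised).
G1, S12, G2 = rng.random((n1, k1)), rng.random((k1, k2)), rng.random((n2, k2))

def objective(R, G_i, S_ij, G_j, H_j):
    """Frobenius reconstruction error plus the constraint penalty tr(G_j^T H_j G_j)."""
    residual = R - G_i @ S_ij @ G_j.T
    return np.linalg.norm(residual, "fro") ** 2 + np.trace(G_j.T @ H_j @ G_j)

def explained_variance(R, G_i, S_ij, G_j):
    """Evar = 1 - RSS / total sum of squares of the observed entries."""
    rss = np.sum((R - G_i @ S_ij @ G_j.T) ** 2)
    return 1.0 - rss / np.sum(R ** 2)

print("objective:", round(objective(R12, G1, S12, G2, H2), 3))
print("Evar:", round(explained_variance(R12, G1, S12, G2), 3))
```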
We find this approach to be robust: small variations in initial parameter tuning do not impede the overall final quality of the fused system (data not shown). In our study, factorisation ranks of 50 to 80 yield models of similar quality. In general, we find that if the data contain meaningful information (as opposed to randomised input), the optimised factorisation ranks are much smaller than the input dimensions because these data can be effectively compressed, and a low-dimensional representation will provide a good estimate of the target relation matrix. Conversely, this would not hold true if we were to predict arbitrarily assigned labels. In that case factorisation ranks would have to be substantially larger in order to produce somewhat comparable models. See Žitnik & Zupan (2013) [24] for a detailed explanation of the algorithm.

Disease class assignment. Each factorisation run produces a set of matrix factors that reconstruct the three relation matrices in our model. For disease association discovery, we are interested in approximating R_12 ≈ G_1 S_12 G_2^T, specifically factor G_2, which contains meta-profiles of DO terms and is used to identify classes of diseases. Class membership of a disease is determined by the maximum column coefficient in the corresponding row of G_2. This is a well-known approach in applications of non-negative matrix factorisation [30,31]. A binary connectivity matrix C is then obtained from class assignments, with C_ij set to 1 if disease i and disease j belong to the same class (see algorithm in Figure 1-B). Repeating the factorisation process 15 times with different initial random conditions and factorisation ranks gives a collection of connectivity matrices, C^(i), i ∈ {1, 2, …, 15}. These are averaged to obtain the consensus matrix C that is then used to assess the reliability and robustness of disease associations. The entries in the consensus matrix range from 0 to 1 and indicate the probability that diseases i and j cluster together. If the assignment of diseases into classes is stable, we would expect that the connectivity matrix does not vary among runs and that the entries in the consensus matrix tend to be close to 0 (no association) or to 1 (full consensus for association). To recover informative and relevant disease associations, we are interested in diseases with high values in the consensus matrix. The process is outlined in the algorithm given in Figure 1-B.

Disease associations scoring. Disease associations are scored by permuting the entries in the gene-disease relation matrix R_12 and inferring the prediction model from the permuted matrix. Matrix R_12 encodes relations between genes and diseases, and via genes to the rest of the fusion model, so permuting its entries is sufficient for a complete rewiring of associations. To compute the p-values for the disease associations observed in our inferred model, we generate 70 consensus matrices (each one averaged over 15 permutations of a disease-gene connectivity matrix, giving 70 × 15 = 1,050 unique matrices) and express the p-value of a particular disease association as the fraction of factorisation runs in which it was observed.

Table 3 | Relative contribution of each data source to the fused model. Starting from the configuration given in Figure 3-A, we remove individual data sources, re-run the data fusion algorithm, and compute residual sum of squares (RSS) and explained variance (Evar) changes for the resulting model.
For example, if we remove protein-protein interaction data (column labelled "H_1^(1)"), the quality of the resulting fused model drops by 2.0% (i.e. RSS increases by 2.0% and Evar decreases by 2.0%). The column labelled "H_4 + R_14" corresponds to the configuration in which we remove all drug-related information from the system, while the one labelled "H_4" indicates that only drug side-effects information was removed.
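Returning to the disease class assignment and consensus step described above, the short sketch below reproduces the argmax class assignment, the binary connectivity matrices, and their average, the consensus matrix. The G_2 factors here are random stand-ins for the outputs of 15 factorisation runs; the code is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_diseases, n_runs = 25, 15

def connectivity(G2):
    """Binary connectivity matrix: C[i, j] = 1 if diseases i and j share a class."""
    classes = np.argmax(G2, axis=1)          # class = index of the largest coefficient
    return (classes[:, None] == classes[None, :]).astype(float)

# Stand-ins for the G2 factors produced by 15 factorisation runs with
# different random initialisations and ranks (here the rank varies from 3 to 6).
runs = [rng.random((n_diseases, rng.integers(3, 7))) for _ in range(n_runs)]

# Consensus matrix: entries close to 1 mark disease pairs that consistently cluster together.
consensus = np.mean([connectivity(G2) for G2 in runs], axis=0)

i, j = np.unravel_index(np.argmax(consensus - np.eye(n_diseases)), consensus.shape)
print(f"most stable pair: diseases {i} and {j}, consensus = {consensus[i, j]:.2f}")
```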
Development of the Sakiika Transport Test: A Practical Screening Method for Patients with Oral-phase Dysphagia

Purpose: This study aimed to develop a simple screening test for mastication, the "Sakiika (squid jerky) transport test" (STT), which evaluates the vertical jaw movement coordinated with the lateral tongue movement during stage I transport, and to investigate the possibility of its clinical application. Methods: The study included 73 people with dysphagia (mean age, 78.5 ± 7.8 years; median age, 79.0 years; interquartile range, 75.0-84.0). The STT evaluated the ability of a participant to transport a piece of squid jerky placed on the midline of the tongue to the molar region. The STT score was defined as the number of vertical jaw movements occurring as the tongue transported food to the molars. A cutoff value was set by comparing the STT scores with masticatory function evaluated via a videofluoroscopic swallowing study and with food texture evaluated using the Food Intake LEVEL Scale (FILS). Results: The STT scores counted by the two examiners had a κ coefficient of 0.79, indicating good reliability. The STT score was significantly associated with both the presence of masticatory movement (p = 0.019) and food texture classified by FILS (p = 0.032) at a cutoff value of "3" (3 vertical movements). The STT showed 62% sensitivity and 75% specificity for masticatory movements. Conclusion: The STT could be a useful screening test to assess the presence or absence of food transportation to the molars for mastication in older patients with dysphagia. In addition, the STT could be useful in identifying the need to modify food texture to match function.

Introduction

Masticatory disorder is a cause of malnutrition [1] and choking [2] among older individuals requiring long-term care, and thus has a major impact on life prognosis [3]. The presence of occlusal support and the number of teeth present have been shown to be involved in the maintenance of masticatory function [4,5]; therefore, it is important to prevent dental diseases in older individuals to help them maintain a good quality of life. In addition to occlusal support and the number of teeth present, oral motor function influences masticatory function [6,7]. Oral function, including tongue motor function, declines with age [8] and can be impaired by sequelae of cerebrovascular or neurodegenerative diseases [9]. For these reasons, in older people especially, it is necessary to address not only masticatory disorders caused by tooth loss (organic masticatory disorder) but also masticatory disorders resulting from a decline in oral function (motor masticatory disorder) [6]. Decline in coordinated movement while eating, principally in the lips, tongue, cheeks, and mandible, causes a decrease in masticatory performance, making bolus formation difficult.
As a result, food enters the pharynx while in a state of incomplete mastication, increasing the risk of choking and aspiration. In Japan, approximately 4,000 people die every year from choking on food [10], and masticatory disorder is suspected to be a contributing factor. To prevent such choking and aspiration in older people or patients with dysphagia, the physical properties and size of the food ingested needs to be modified [11,12]. For older individuals with decreased muscle strength, ill-fitting dentures, or inadequate masticatory ability, tender food that can easily be crushed by chewing is recommended. In addition, for individuals who cannot perform grinding movements with their molars due to issues with tongue rotation and lateral movement, food with a soft consistency that can be squeezed by pressing it with the tongue against the palate is recommended. For individuals who have difficulty with this method, pureed or finely chopped food that can be swallowed directly is recommended. Such food texture types should be determined based on an assessment of oral function. Oral function tests that have been reported include evaluation methods using tongue pressure, bite force, or masticatory efficiency [13][14][15]. However, it is difficult to conduct these oral function tests in patients with issues such as severe cerebrovascular disease or dementia, as they are often unable to follow the examiner's instructions [16]. Moreover, detailed examinations using videofluoroscopy or video endoscopy are highly reliable assessments of swallowing function, but many facilities are unable to perform these examinations due to lack of equipment or other reasons. When solid food requiring mastication is ingested, the first movement observed is stage I transport [17,18]. In stage I transport, food ingested in the midline at the maximal jaw opening is pulled backward by the tongue, lifted upwards by elevation of the tongue surface, and pushed laterally against the occlusal surfaces of the molars by the rotation of the tongue. Stage I transport is considered to be complete when the jaws close [19]. In this study we focused on the jaw movement of stage I transport. We considered that the number of tongue rotations and lateral movements necessary for transporting food from the middle of the tongue surface to the occlusal surface of the molar region was related to the number of vertical jaw movements, and that an increase in the number of jaw movements would indicate the degree of difficulty in the onset of masticatory movement. The purpose of this study was to develop a new oral function test to count the jaw movements required for transporting test food to the molar region and to verify whether this test would be useful for evaluating the tongue rotation and lateral movement required for mastication. Participant characteristics The subjects consisted of 108 patients aged ≥60 years who presented to the outpatient clinic with a chief complaint of dysphagia and were assessed for swallowing function via a videofluoroscopic swallowing study (VFSS) throughout October 2019 to March 2020. The primary disease-causing dysphagia and dental charts of each patient were collected from their medical records. Thirty-five patients were excluded from the study for the following reasons: (1) they could not eat soft food due to dysphagia (19 cases), (2) they were classified as Eichner [20] B-4 or C (10 cases), and (3) they had no anterior tooth contact (6 cases). 
Ultimately, 73 patients (44 men and 29 women) with a mean age of 78.5 ± 7.8 years were selected to participate. The participants were characterized by evaluation of their activities of daily living (ADL), eating function, and cognitive function. ADL was evaluated using the Barthel index [21], with a score ranging from 0 to 100, to which either the participant or their guardian responded. Evaluation of eating function was performed using the Food Intake LEVEL Scale (FILS) [22], which evaluates the severity of dysphagia as follows: Levels 1-3, no oral intake; Levels 4-6, oral intake and alternative nutrition; Levels 7-9, total oral intake; and Level 10, within a normal range. Levels 4-7 indicate oral intake of easy-to-swallow food even without mastication [22]. Cognitive function was evaluated using the Mini-Mental State Examination (MMSE) [23], with scores ranging from 0 to 30. Participants were asked the MMSE questions in a direct, face-to-face meeting with an examiner.

Sakiika (squid jerky) transport test (STT)

The test food used in this study was commercially available sakiika (dried squid jerky; Takuma Foods Co., Aichi, Japan). It was cut in half, with the resulting pieces measuring 108 mm × 9.0 mm × 2.0 mm (Fig. 1). Dry-processed squid jerky was selected to prevent accidental ingestion or aspiration, as it would not be easily bitten through. During the STT, the examiner filmed the movements of the lips, tongue, and jaw of the participant using a video camera (HDR-CX680; Sony, Tokyo, Japan) held in a fixed position and recorded in AVCHD format. The participant was seated in a relaxed state with one end of the test food (3 cm from the edge) placed on the midline of the tongue surface. The examiner lightly held the other end of the test food and instructed the participant to chew the food with their molars. The examiner gave no directions regarding to which side of the mouth the participant should transfer the test food. The number of vertical jaw movements made by the participant in coordination with the tongue when transporting the test food from the midline to the molar region (STT score) was counted using the video recording. Counting was performed by two dentists (Examiners A and B), both of whom were unaware of the feeding and swallowing abilities of the participants. Examiner A was a dentist specializing in dysphagia rehabilitation, and Examiner B was a dentist with no experience in dysphagia rehabilitation. The measured values were based on data from Examiner A.

Masticatory ability evaluation by videofluoroscopic swallowing study (MEVF)

VFSS was performed using a fluoroscope for evaluation of masticatory ability (VC-1000; Hitachi Medical Corp., Tokyo, Japan). Considering the risks of aspiration and the possibility of choking on bite-sized pieces of food [11], a frustum-shaped test food (with a square base measuring 15.0 × 15.0 mm; weight, approx. 4 g) was used in the VFSS to lower the risk of choking. The test food was prepared by shaking 100 ml of water with 50 g of barium sulfate (Baritop P; Kaigen Pharma Co. Ltd., Osaka, Japan), 20 g of granulated sugar, and 6.8 g of gelling agent (Softia Tes Cup; NUTRI Co. Ltd., Mie, Japan). Figure 2 shows the fork pressure test recommended by the International Dysphagia Diet Standardisation Initiative being performed on this test food. The results showed that the texture of the test food corresponded to level 7: Regular Easy to Chew [12], meaning that it was soft, tender and easily chewable by even older adults with reduced muscle strength.
The participant was seated in a natural head position during the VFSS. The anterior-posterior image was filmed from the front of the participant. The examiner instructed the participant to take the test food from a spoon onto the midline of the tongue using their lips and anterior teeth. The VFSS was recorded as a video in AVI format at 30 frames/s. VFSS is undertaken routinely at the clinic for evaluating dysphagia in regular clinical practice. The recorded video was used to evaluate whether the test food was a) masticated, b) only mashed by the tongue pushing it against the hard palate [24], or c) swallowed whole. Participants deemed to have masticated the food were assigned to the masticating group, while those evaluated as having only squeezed the food or swallowed it whole were assigned to the non-masticating group. Analysis of masticatory movement was performed in a blinded manner, separately from the STT, by a dentist specializing in dysphagia rehabilitation.

Oral function testing

To compare the acceptability of the STT carried out in this study with other existing oral function tests, participants also underwent a tongue pressure test, an oral diadochokinesis test, a bite force test, and a masticatory performance test using gummy jelly. In all of these tests, participants were given detailed, face-to-face explanations of the test procedure by the examiner prior to measurement.

Tongue pressure test

Tongue pressure, which refers to the force exerted by the tongue against the palate, was measured using a plastic balloon probe and a tongue pressure instrument (TPM-01; JMS Co., Hiroshima, Japan) following the method of Tsuga et al. [25]. Participants were seated and instructed to press the balloon with the tongue against the palate with maximum effort for 7 s. Measurements were taken three times, and the peak value was recorded.

Oral diadochokinesis test

The speed of the lip and tongue movements was measured using an oral function analyzer (T.K.K. 3351; Takei Scientific Instruments, Tokyo, Japan). Participants were asked to pronounce the /pa/, /ta/, and /ka/ sounds as quickly as possible for 5 s each. Each sound was measured once, and the number of times each syllable was pronounced per second was recorded [26].

Bite force test

The bite force, which represents the strength of the masticatory muscles, was measured using a pressure-sensitive film for bite force measurement systems (Dental Prescale II; GC Corp., Tokyo, Japan). The participants were seated, and the pressure-sensitive film was lightly placed between the upper and lower dental arches. Participants were instructed to bite the pressure-sensitive film as hard as possible for 3 s when cued by the examiner. The pressure-sensitive film was then scanned by a designated device and input into a computer. Bite force was analyzed using bite force analysis software (Bite Force Analyzer; GC Corp.) [27].

Masticatory performance test

Masticatory performance was evaluated using gummy jellies containing glucose according to the method described by Shiga et al. [13]. Participants were seated and instructed to chew a gummy jelly on their preferred chewing side for 20 s. After that, the participants were instructed to hold 10 mL of distilled water in their mouth and then spit the water into a filtered cup. The glucose content of the filtered extract was measured using a measuring device (Gluco Sensor GS-II; GC Corp.) to determine masticatory performance.
Statistical analysis

First, we evaluated the effect of cognitive function on the ability to perform each test. For each oral function test, participants were divided into performing and non-performing groups. The Mann-Whitney U test was used to compare the MMSE scores between the two groups for each test. For the STT, participants were divided into transporting and non-transporting groups, and MMSE scores between these two groups were also compared. Next, the reliability of the STT was evaluated by calculating the κ coefficient of inter-examiner reliability in the STT score. Several cutoff values were set for the STT score, and participants were classified into two groups according to each cutoff value. Participants who could not transfer the test food or whose STT score was equal to or above the cutoff value were classified into the high score/non-transport group, and those with STT scores below the cutoff value were classified into the low score group. Based on the MEVF, participants were also classified into masticating and non-masticating groups. Based on the FILS, participants were classified into a texture-modified diet (TMD) without mastication group (level 7 or lower) and a food with mastication group (level 8 or higher). To examine the cutoff value for the STT, the two STT score groups and the two MEVF groups were compared using Fisher's exact test and the Youden index [28]. Furthermore, the STT score classifications that showed significant differences were compared with the two FILS groups using Fisher's exact test. The sample size was calculated using G*Power 3.1 for Windows (Kiel University, Kiel, Germany) [29], with the α level (type I error) set at 0.05 and the power set at 0.80. The effect size was assumed to be 0.30 (moderate). Accordingly, an analysis with ≥88 participants was planned. Data were analyzed using the Japanese version of SPSS for Windows version 26.0 (IBM Corp., Armonk, NY). All results were presented as the median and interquartile range (IQR) since the data showed a non-normal distribution. The significance level was set at α = 0.05.

Ethical approval

This study conformed to the principles described in the World Medical Association Declaration of Helsinki (2002) and was approved by the ethics committee of Nippon Dental University (NDU-T2018-10). Informed consent was obtained from all patients or their legal guardians.

Participants

The median age of the 73 participants was 79.0 years (IQR: 75.0-84.0 years), and the mean age (± standard deviation) was 78.5 ± 7.8 years. Of these, 44 participants (60.3%) were men. The primary diseases that caused dysphagia were cerebrovascular disease (21 cases), progressive neuromuscular disease (13 cases), oropharyngeal cancer (4 cases), gastric or esophageal cancer (2 cases), and diseases classified as "other" (34 cases). The median value of the Barthel index was 100.0 (IQR: 80.0-100.0). For FILS, 1 participant was level 5, 1 participant was level 6, 20 participants were level 7, 44 participants were level 8, and 7 participants were level 9. The test scores of the participants able to perform each test are listed in Table 1. Of the participants able to transport the test food to the molar region, 33 received a score of 1 on the STT, 17 scored 2, 11 scored 3, and 5 scored ≥4. Seven participants were unable to transport the test food. The median STT score of the 66 participants able to transport the food was 1.5 (IQR: 1.0-2.3).
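As a quick sanity check on the planned sample size described in the statistical analysis above, the sketch below re-derives the required n for a two-sided test with effect size 0.30, α = 0.05 and power 0.80 using the standard normal-approximation formula for one degree of freedom. This is a hypothetical back-of-the-envelope calculation only; G*Power's exact procedure may differ slightly.

```python
from math import ceil
from scipy.stats import norm

alpha, power, effect_size = 0.05, 0.80, 0.30   # values stated in the protocol

# n = ((z_{1-alpha/2} + z_{power}) / w)^2 for a 1-df chi-square-type comparison.
n = ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) / effect_size) ** 2
print(f"required sample size: {n:.1f} -> at least {ceil(n)} participants")   # ~87.2 -> 88
```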
In the MEVF, 60 participants masticated the test food, 8 only squeezed the food with the tongue, and 5 swallowed the food whole.

Comparison of MMSE score between participants who could and could not perform oral function tests

Table 2 shows a comparison, using the Mann-Whitney U test, of MMSE scores for the participants who could and could not perform the STT or other oral function tests. Except for the STT and MEVF, the MMSE scores for all oral function tests were significantly lower in the non-performing group than in the performing group. The median MMSE score of the 73 participants who could perform the STT was 26.0 points (IQR: 22.0-29.0). In the STT, the median MMSE score of the 66 participants able to transport the test food was 26.0 points (IQR: 22.0-29.0), and the median MMSE score of the 7 participants unable to transport the test food was also 26.0 points (IQR: 18.0-27.0). There was no significant difference between the two groups (p = 0.43).

Reliability between the STT examiners

For STT scores, the two examiners agreed on 61 of the 73 participants (30 participants with STT score 1, 15 with score 2, 9 with score 3, and 7 participants unable to transport the test food). The inter-examiner κ coefficient for the STT scores was 0.79.

Examination of cutoff value and validity of STT score using MEVF

Four patterns were set, with the STT score cutoffs at two, three, four, and five. Participants were divided based on MEVF findings into the masticating group (n = 60), in which mastication movements were observed, and the non-masticating group (n = 13), in which only squeezing of the test food or swallowing it whole without mastication was observed. Table 3 shows a cross-table of the two STT groups and the two MEVF groups, the results of Fisher's exact test, and the Youden index for each STT score cutoff. A significant association with the two MEVF groups was found when STT score cutoffs were set at three (p = 0.019), four (p = 0.001), and five (p = 0.001).

Examination of validity of STT score using FILS

The relationship between the two STT score groups and the two FILS groups was investigated for the three STT score cutoffs of 3, 4, and 5, for which significant associations had been found between the STT score groups and the MEVF groups. Table 4 shows a cross-table of the two STT groups and the two FILS groups, and the results of Fisher's exact test for each STT score cutoff. With participants classified according to the STT score into the STT score ≤2 group and the STT score ≥3/non-transport group, a significant association with the two FILS groups was found (p = 0.032).
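To make the cutoff analysis concrete, the sketch below evaluates a single 2×2 cross-table with Fisher's exact test and the Youden index, plus sensitivity, specificity, and negative predictive value. The cell counts are not taken from Table 3; they are hypothetical values chosen only to be consistent with the sensitivity (62%), specificity (75%), and negative predictive value (90%) reported in the Discussion for the cutoff of 3, with 13 non-masticating and 60 masticating participants.

```python
from scipy.stats import fisher_exact

# Rows: STT >=3 / non-transport vs. STT <=2; columns: non-masticating vs. masticating (MEVF).
# Hypothetical counts consistent with the reported sensitivity/specificity/NPV (total n = 73).
tp, fp = 8, 15    # non-masticating and masticating participants flagged by the STT
fn, tn = 5, 45    # non-masticating and masticating participants not flagged

odds_ratio, p_value = fisher_exact([[tp, fp], [fn, tn]])

sensitivity = tp / (tp + fn)          # 8 / 13  ~ 0.62
specificity = tn / (tn + fp)          # 45 / 60 = 0.75
npv = tn / (tn + fn)                  # 45 / 50 = 0.90
youden = sensitivity + specificity - 1

print(f"Fisher's exact p = {p_value:.3f}, odds ratio = {odds_ratio:.2f}")
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, "
      f"NPV = {npv:.2f}, Youden index = {youden:.2f}")
```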
During the test, one end of the test stick was placed outside the oral cavity. Thus, it minimized the risk of accidental ingestion or aspiration during the mastication test in older people and people with dysphagia. Additionally, the examiner could observe tongue movements indirectly through the movement of the test stick extending from the subject's lips. Squid jerky (sakiika in Japanese) is a traditional food in Japan, and its taste stimulates the patients' appetite [30]. The STT could be easily performed on patients with cognitive decline, as the test procedure is simple. Previous oral assessments such as the tongue pressure test, oral diadochokinesis measurement, bite force measurement, and masticatory performance test are sometimes complicated to perform in patients with cognitive decline. In particular, the procedure for masticatory performance tests using gummy jellies seemed complicated for the older adults with dysphagia who participated in the study. The test method developed in this study should be feasible in other countries using foods with similar properties, such as beef jerky or dried fruit. Considering screening tests that should be fast, safe, and inexpensive [31], the STT meets these requirements. The reliability, validity, sensitivity, and specificity of the STT were examined to assess the utility of the STT as a screening tool. The STT scores counted by the two examiners showed a high κ coefficient, indicating that the STT offers good reliability. The STT score cutoff was investigated as an effective screening tool for evaluating the tongue rotation and lateral movement required for mastication in patients with dysphagia. The validity of the STT was verified by comparing the STT scores and findings from MEVF. The anteroposterior images of the VFSS as the gold standard in dysphagia evaluation [32] can show movements of a test food containing a contrast agent after injection on the midline of the anterior tongue with a spoon. It is then possible to assess whether the food is carried by the tongue to the molar region and chewed by the vertical movement of the teeth, whether it is only squeezed against the palate by the tongue surface, or whether it is transported to the pharynx in its original state (swallowed whole) [24]. With inadequate transport defined as an STT score of ≥3, ≥4, or ≥5, significant associations were found with the masticating and non-masticating groups as evaluated by MEVF. The actual swallowing function in many older individuals living at home or in facilities has been reported in the past without any testing being performed, and in many cases, resulted in discrepancies between feeding, swallowing function and nutritional intake [33]. We considered it necessary to establish a safe cutoff value for the STT score so that it could be used as an indicator for screening when modifying the food texture in such cases. We considered that a safe cutoff value for the STT score should be set to allow screening with the STT as an indicator when modifying the texture of food for older individuals at home or in facilities. The STT was considered more sensitive when the participants with a high score or those evaluated as non-transporting in the STT occupied a high proportion of the participants found to be non-masticating by MEVF. In addition, the STT was considered safe when the participants evaluated by MEVF as non-masticating occupied a low proportion of the participants with a low score in the STT, meaning that the negative likelihood ratio was low. 
For this reason, an STT score of 3 was considered valid as the cutoff. In this instance, the STT had a sensitivity of 62% and a specificity of 75%. The negative predictive value was 90%. In other words, if the result was a negative STT, there were no significant issues with the tongue movements necessary for mastication. The validity of the STT as a screening tool was examined by comparing the intake status of "foods that do not need to be masticated" to "foods that need to be masticated" in the FILS. In summary, the association between the STT score and the FILS food texture classification with or without mastication was investigated. When inadequate transport was defined as an STT score of ≥3 or non-transport, a significant association was demonstrated between the STT and the FILS food texture groups. These results indicate that an STT score cutoff of 3 was valid when examined in terms of both masticating and adjusting food texture. One factor that limited the effect of this study was the small number of participants. We were unable to secure a sufficient number of patients because we excluded those with no molar occlusion or defective anterior teeth due to the requirements of the test. Consequently, the results may have been affected by a type II error. Future studies should be conducted with a larger number of participants. The STT may also be useful for adjusting food texture. The STT is a screening test that can be performed simply using another type of food with the same texture as squid jerky. While other methods for screening mastication have existed in the past, the STT is a safe and easy method, and this study may be of considerable significance to clinical practice. Conclusion The STT is a screening test that assesses the ability to transport food to the molars for mastication, focusing on stage I transport. The STT score is determined by counting the number of vertical jaw movements in conjunction with the rotational and lateral tongue movements when transporting food to the molars. The STT score showed a high negative predictive value when compared to the masticatory performance assessed via the VFSS. This study suggested that the STT was useful in identifying individuals who could perform the tongue rotation and lateral movements required for mastication.
Foot Disorders in Nursing Standing Environments: A Scoping Review Protocol

Musculoskeletal disorders can be significantly disabling, particularly those related to work, when the underlying mechanisms and clinical variables are not well known and understood. Nurses usually remain in standing positions or walk for long periods, thus increasing the risk for the development of musculoskeletal disorders, particularly of the foot, such as plantar fasciitis or edema. This type of disorder is a major cause of sickness absence from work, and also of dropout among nursing students, which contributes to the shortage of nursing professionals. This review will address foot disorders that arise from prolonged standing in nursing professionals and describe the main clinical parameters characterizing them, with exclusions for other health professions or disorders with other identified causes. Studies published in English, French, Portuguese, and Spanish from 1970 to the current year will be considered. The review will follow the JBI methodology, mainly through the PCC mnemonic, and the reporting guidelines for scoping reviews. The search will include the main databases and relevant scientific repositories. Two independent reviewers will analyze the titles, abstracts, and full texts. A tool developed by the research team will aid in the data collection.

Introduction

Musculoskeletal disorders (MSDs) can be described as health problems related to the locomotor system, affecting muscles, joints, tendons, the skeletal system, the vascular system, ligaments, and nerves [1,2]. When related to a professional activity or work-related event, MSDs are named work-related musculoskeletal disorders (WMSDs) [3], which represent a major occupational health concern for the nursing profession [4,5], accounting for 60% of reported occupational health injuries [3]. In this sense, nurses are considered to be at a high risk of WMSDs [2,4,5], namely through having their feet exposed to prolonged standing and walking for long distances [6], with a high prevalence of MSDs when compared to other professions [5,7,8]. The concepts of 'prolonged standing' and 'prolonged walking' can be included in a broader definition: 'standing environments'. According to Anderson, Nester, and Williams [9], 'prolonged standing' is defined as spending at least 5% of occupational time standing. Regarding 'prolonged walking', Stolt and colleagues [1] report that nurses walk an estimated distance of 4-5 miles in a 12 h shift, thus spending most of their working time on their feet. 'Standing environments' characterize a potentially aggressive context in the nursing profession, particularly regarding foot health. In fact, 'prolonged standing' carries a 1.7-fold risk for foot pain [9,10] and, according to Reed and colleagues [5], foot/ankle MSDs were the most prevalent conditions experienced by nurses in the previous 7 days, the second most prevalent MSD to impair nurses' physical activity, and the third most prevalent MSD after lower-back and neck problems. Additionally, more than 50% of nurses reported foot/ankle MSDs in the preceding 12-month period [1,5]. As stated by a recent narrative literature review on lower-extremity MSDs in nurses [1], little is known about what types of lower-extremity problems nurses face during their working time, and studies with an exclusive focus on foot disorders in nurses are scarce.
Apart from nurses' personal health and quality of life, MSDs seem to be the leading cause of sickness-related absence from work worldwide [1,16,17], also reducing worker productivity [18]. As a matter of fact, the Fifth European Working Conditions Survey [19] states that exposure to physical risk factors can negatively impact workers' health and well-being, whose protection is one of Europe's main social policy goals and a European Union (EU) core competence. Furthermore, the demanding physical workload that leads to MSDs is usually a cause of nursing student dropout and of early exit of nurses starting their careers, thus contributing to the shortage of nursing professionals [20]. In this sense, and although several studies have identified foot problems as MSDs among nurses, a preliminary search of PROSPERO, MEDLINE, the Cochrane Database of Systematic Reviews, and the JBI Evidence Synthesis has revealed that there are no completed or ongoing scoping or systematic reviews that clearly describe the foot disorders in nursing professionals and the related clinical parameters. Moreover, there are no studies that clearly identify the causes of pain in the foot. On the other hand, podiatric evaluations are poor, which limits a more detailed knowledge of the phenomenon under study. Thus, in order to develop future interventions to address this issue, it is important to map the foot disorders in nursing standing environments and their main clinical parameters, which is the main objective of this scoping review.

Review Question(s)

The review questions are 'What foot disorders do nurses who work in standing environments have?' and 'Which are the main clinical parameters that characterize those foot disorders?'.

Materials and Methods

The proposed scoping review will be conducted in accordance with the JBI methodology for scoping reviews [21,22], considering the PCC mnemonic, where P stands for 'participants', C for 'concept', and C for 'context'. Regarding participants, this review will consider studies that include all nursing professionals who are exposed to standing environments in acute care contexts, namely hospital units. The review will exclude other health professionals, or those who usually work in a stationary environment for most of the time (e.g., clinical appointments, primary care). The concept under study refers to 'foot disorders'. It is widely considered that the foot is one of the most dynamic structures in the human body, acting in concert with the rest of the body during standing and movement [23]. According to Hagedorn and colleagues [24], foot disorders are related to foot posture and foot function and appear in the presence of an imbalance between the various internal structures, including the ankle, causing a structural lesion or affecting tendons and ligaments, and also involving pain. Thus, in this review, we will address disorders of the foot/ankle as a whole, including pain as an isolated disorder, as it is the most frequently reported symptom. As for the context, this review will consider studies that address disorders that occur within standing environments, as defined earlier in this paper. Further contextual descriptions and elements that might enrich the definition of 'standing environments' will also be reviewed and reported by the authors.

Types of Sources and Search Strategy

This scoping review will consider quantitative, qualitative, and mixed methods study designs for inclusion.
In addition, systematic reviews and narrative review papers will be considered for inclusion in the proposed scoping review. Quantitative studies to be included are those with any experimental study design (e.g., randomized controlled trials, quasi-experimental studies) and also observational studies (e.g., descriptive, cohort, cross-sectional studies). Qualitative studies include those that are mainly focused on qualitative data, such as phenomenology and grounded theory, for example. The search strategy will aim to locate both published and unpublished primary studies, reviews, text, and opinion papers. An initial limited search of MEDLINE (PubMed) was undertaken to identify articles on the topic. The text words contained in the titles and abstracts of relevant articles and the index terms used to describe the articles were used to develop a full search strategy for PubMed (Table 1; the search is limited to studies from 1970 until the present, and a similar strategy will be used for the remaining databases). The search strategy, including all identified keywords and index terms, will be adapted for each included information source. The reference lists of articles selected in the study will be screened for additional papers on the topic of interest. Articles published in English, Portuguese, Spanish, and French will be included. Articles published from 1970 to the present will be included, as the oldest nursing theory and model for occupational health nursing dates from 1977 [25]. In this sense, we believe that the topic of nursing personnel's health and self-care related to work was more present in the scientific community from the 1970s onward. The databases to be searched include MEDLINE, CINAHL, Latindex, SciELO, Web of Science, Cochrane Database of Systematic Reviews, JBI Database of Systematic Reviews and Implementation Reports, and PROSPERO. Sources of unpublished studies and gray literature to be searched include Google Scholar, Open Grey, OpenDOAR, and ProQuest Dissertations and Theses.

Study/Source of Evidence Selection

Following the search, all identified records will be collated and uploaded into Mendeley and duplicates removed. Following a pilot test, titles and abstracts will then be screened by two independent reviewers for assessment against the inclusion criteria for the review. Potentially relevant papers will be retrieved in full and their citation details imported. The full text of selected citations will be assessed in detail against the inclusion criteria by two independent reviewers. Reasons for exclusion of full-text papers that do not meet the inclusion criteria will be recorded and reported in the scoping review. Any disagreements that arise between the reviewers at each stage of the selection process will be resolved through discussion or with the inclusion of a third reviewer. The results of the search will be reported in full in the final scoping review and presented in a Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) flow diagram [26]. As an additional methodological step, studies identified from the reference lists of previously included studies will be assessed based on title and abstract.

Data Extraction

Data will be extracted from papers included in the scoping review by two independent reviewers using a data extraction tool developed by the reviewers.
The data extracted will include specific details about the population, the concept of interest, and the context relevant to the review question. A draft extraction tool is provided (Table 2). The draft data extraction tool will be modified and revised as necessary during the process of extracting data from each included paper. Modifications will be detailed in the full scoping review. Any disagreements that arise between the reviewers will be resolved through discussion or with a third reviewer. Authors of papers will be contacted to request missing or additional data, where required.

Data Analysis and Presentation

Data will be presented in a tabular manner, taking into account the study research question. A descriptive summary will follow the results, unfolding how they answer the proposed questions. Data will be summarized through the following information: author(s), year of publication, country of origin, purpose, population, sample size, methodology, walking hours (if applicable), standing hours (if applicable), foot disorders, and the respective clinical parameters.

Contributions to the Topic

Differences in the way foot disorders are defined, combined with multiple and compound working contexts and personal traits, generate great variability in clinical podiatric prevalence rates, thus increasing complexity when interventions are needed at the occupational level. By conducting this scoping review, we expect to further enhance the knowledge related to nurses' foot disorders in standing environments, particularly the defining characteristics and related phenotypic clinical parameters in this population. Additionally, the intended description will allow a better understanding of the influence of nursing labor contexts on foot health and consequently provide guiding principles for further research and for therapeutic interventions within the scope of occupational health.

Data Availability Statement: The data presented in this study are all available within this manuscript.
The Language of Essence and Inference in Mental Health: Natural Law - Legal Positivism - Cognitive Dissonance

K-12 teachers are experiencing more stress than ever before, and the problem is not expected to go away anytime soon. In this article, I want to show how it is possible to re-stimulate damaged neural pathways using insights drawn from a newly developed cognitive model, and to present anecdotal evidence from an ongoing longitudinal qualitative study that began in 2010. The original study was started to seek support for, and to validate, the hypothesis that cognition is equal to an emotional response to absurd notions, thoughts, ideas, etc.; that, where the individual (mind) achieves resolve, the physical brain would rest and the body would enter a relaxation phase, or the "relaxation response" [1], equal to post fight-or-flight, thus allowing toxins to flush freely from the body through the urine as vital organs become unstressed; and that resolve may be inherently human. It is the study of dementia and Alzheimer's disease which shows us that neural pathways in the brain can become damaged and/or lie dormant, and it is in this research and study that we believe we have discovered a novel approach to assist with the problem of K-12 teacher stress and mental health: re-stimulating dormant neural pathways which inherently make it possible to exist within this modern-day environment without succumbing to the adverse effects of stress.

Introduction

Beginning in 1987, the hypothetical cognitive model/theory C=ea 2 was researched and developed over a thirty-year period of multidisciplinary experiential research and study by an "individual investigator" (Bejan, 2008), and first realized in 2009. C=ea 2 brings us closer to understanding the language of the subconscious mind, positing that "a truer understanding of intent would have to involve empathy" [2], as it is that "the ability to accurately infer the thoughts, intentions and emotional states of others has often been associated with the concept of empathy" (Morrison 2004). For each party to gain an adequate understanding of the other, and of what is being conveyed under the influence of legal positivism in the teacher-student relationship, empathy would play a vital role in communication and serve as a buffering type of affect to offset the rigidity of some educational environments. Over a prolonged period, a society governed by legal positivism, or "the rule of law", will produce rigid systems [3], which show up in political correctness, interfere with intimacy and, in turn, create a static environment without empathy, as well as influence its citizens to become cold and indifferent. At the other end of the spectrum, and in comparison, natural or "moral law" governing a society will produce extremism through fundamentalism and dictatorship, and society overall would have a "cult-like" essence. In a system or a society, Legal Positivism (LP) springs forth from Natural Law (NL) because of ignorance of traditional values and practices (ritual), and the individual citizen living under LP and under stress will investigate a return to NL, seeking respite and relief from the stress within the mind and the strain on the emotions. It is this that C=ea 2 points out as one cause of cognitive dissonance among some individuals, and today this is having adverse effects on the mental health of teachers and students, as well as the rest of society, in the form of collateral damage, i.e.
where an individual cannot act and behave in accordance to what can be understood as natural or "moral law", the individual will be disciplined according to legal positivism or "the rule of law" (cause and effect)". We have been informed and educated throughout history about the need for balance in our individual lives, and for a balanced society, which quantified would mean that even on the molecular level, we need balance, now where we point this understanding in the direction of the mind of an individual, it becomes the mind body problem, and it is only with a theory where we can expect to establish a balance, this research and C=ea 2 strongly suggests, and will prove it true that realizing that "balance" is an inherent ability to maintain the "balancing act". Thailand. At the time I was in the closing stages of a more than five-year long battle with major depression that had begun in 2006. In discussing C=ea 2 as a treatment for mental illness, I would like to discuss my own personal story, including the roots of my depression as well as the way in which information and insights gleaned from this study helped me determine ways of using the way my mind naturally worked for better mental health. The Roots of Depression Adequate For me, the roots of my struggle with major depression go deep. Early childhood abuse led to a diagnosis of Post-Traumatic Stress Disorder at the age of four. From that time onward social difficulties and the repercussions of abuse meant that the raw material for depression (a constant sense of anxiety and worry) were present, and periodically in my life I have known prolonged periods of depression of some severity. Given that depression is a way the body slows down to recover from dealing with too much stress and pressure, it is no wonder that someone who has lived the sort of life that I have has faced depression on a regular basis. Abuse brings with it a set of insoluble problems that require a great deal of thought. These problems include worries about love and identity, as well as ways of reacting to the world to defend ourselves from the dangers we face that many are simply unaware of. The added pressures and concerns make developing equilibrium more difficult, but there are other benefits that are often neglected as well. In 2006, my father died at the age of 59. His death prompted a public examination of my life and childhood and widespread knowledge of child abuse and prompted a great deal of concern about my own longevity and chances for happiness and success in love. For the next five years I struggled with major depression, including occasional suicidal ideation, compounded by problems in work and family. The general difficulties of life along with the specific difficulties of my life seemed impossible to deal with, to the point where creative writing was difficult. The Treatment and Cure In discussing this work, the most helpful aspect to me was an understanding of the larger process by which our thoughts feed on themselves and cause depression and ways that equilibrium can be found to deal with them. I found that for myself, a great deal of conversation and writing was necessary to work out the difficulties in my own mind. There was nothing too mysterious in that aspect of the treatment. What was of particularly greater interest was the physiological responses to the treatment. 
As my depression waned, for several weeks I had bad smelling and dark-colored urine, despite the large amount of water I was drinking, and the urine did not return to a normal color until after the feelings had returned to a state of basic mental health. Living abroad at the time and taking a close care of my health in that regard, I found the manifestation of systemic detoxification in relation to depression remission to be highly remarkable as a correlation. Wahidah Binti Abdrahim-Female-Malaysia Schizophrenia 2013 Hello, I am Wahidah from Malaysia. I am 23 years old. I have a very serious problem in life and wish I get better. 6 years ago, since after high-school, I suffer from a bad depression, because my fibromyalgia and bipolar disorder became worse when I was 17. And woke up every day from sleep, feeling so bad, feverish all time. And as far as I could remember, I never sleep well in 6 years. I always had bad dreams, maybe it is because of thinking too much. I live in Perlis Malaysia, and to be honest, I am in a closed place and closed-minded people, in which, it is hard to find a suitable therapist to talk. It is very sad that, my doctor said I must take the pills, Seroquel 600mg and Epilim 1000mg per day and some other painkillers. And they warned me, if I didn't take my pills...I will be ill again. I was admitted into psychiatric ward 5 times since 2009 until 2012. If I take the pills as prescribed by the doctors, I never can have a life like others do. I cannot go study, I simply cannot enjoy life March 29 / 2013 Last night was apparently a very short discussion. But I feel a little relief than I was before. I know I wasn't wrong to liberate myself from being controlled by any belief or religions. I wasn't wrong to ask people not to fear me with any hatred in the world. If it is about truth and sincerity, there is nothing to fear. And I know that I wasn't wrong to choose not to take Seroquel anymore. I believe that life is always possible. I believe I can get rid of my illness. And by believing in Miracles, life would be beautiful as it supposed to be. I wish to do everything I want. Go travels and enjoy the nature. Miracles shall happen. And I strongly believe in that. April/1/2013 It takes about 24 hours for me to reflect myself. I mean, to re-think and re-think again after what has happened to me regarding the belief. And so, with all the facts I collected from holy books, the history of International Journal of e-Healthcare Information Systems (IJe-HIS), Volume 5, Issue 1, June 2018 ancient people, and story of prophets, and by trusting my sense as a human being, and now I become stronger again. Not too depress and not too maniac. Middle, I eat middle, enjoy middle, do prayers and study religions middle, talk to my sister's middle much, and took medication lesser, and now, I think that I am much more normal than I was before, I sleep better. The brain is unlocked, and I am much happier!! :) April/3/2013. I feel much more calm, stronger. We are always where we need to be. I am too far from my best friends, and the only thing I can do is prayer. And never doubt on power of prayer, for prayer is such an energetic thought that may change the whole world. As simple as to believe that Hope never end. 
April/4/2013 I still remember I just came out of psychiatric ward, and went out with my boyfriend and his sister, my boyfriend put our picture together, and a 'pious' brother was angry at me He said, I better be insane than I break the law That was the reason why I tried suicidal, God loves me, I failed! I swallowed 40-45 pills at one time but only sleep. One day when I was on car, and I intend in heart to suicide when I reached home, Car suddenly broke down All of this made me believe in positive energy Great news, I had a so much better sleep. For years I never slept soundly, today I slept enough and woke up not because of my spine telling me to get awake. One sleep, good sleep, enough sleep, I am confident that I may recover from illness, and enjoy life again. The Hope is always with me, and I believe in Hope. :) April/6/2013 ^_^ 4 days consistently sound sleep :) Now is 4:23 am. I already slept for a few good hours and woke up early at this time. I was on vacation, alone, for past 2 days. My family unable to come with me, but I still enjoyed my vacation. I go to the beach, meet people, and last evening I returned home, then I went for hair color and SPA, and enjoying being wild and naughty sometimes :P Good days I had. And I begin to feel confident and more confident. The more I believe in myself, the more miracle seems to get attracted to me. And very good thing that, I be happy, excited, and not manic. Between January and April is the cycle of Manic. This year I t make it through, able to keep quiet. Final Entry April/12/2013 I am feeling happy with this study. And now, I get the answers I want from this study. This study makes me feel better, everything as I wrote on previous journal. This study really brought me into a new world. Because, it proves that whatever around me, and what I was taught during my childhood, everything is related to each other. Jon Elder-Male-Pennsylvania USA PTSD 2013 I'm 35 and work as a biomedical engineer. I was recently discharged from the Air Force with PTSD and have worked to investigate alternative methods to combat my anxiety; this has resulted in a great deal of grad school work, and independent study in orthomolecular psychology and the mind body problem from a psychoanalytic view. So glad to be here and I look forward to discussions. May 13, 2013 All things are connected, none more so than the relationships we share with our friends, family, and community itself. This relationship however goes both ways we not only effect those around us, but in turn are affected. In terms of correlational emotional distress as society has become more isolated and autonomous, I believe many factors from depleted toxic soil, air, and water and its subsequent manifestation in food like products that leave us depleted of crucial amino acids, vitamins and nutrients. An ironic mirror of the toxic soul of those who have sown the seeds of their own fate. May 13, 2013 I believe it is the best and possibly the only hope our planet has given the radical militarization and consolidation of power we've witnessed the last few decades. The best way of changing society begins with personal change toward wholeness, equilibrium and emotional maturity, and somehow spreading that catalyst virally to everyone around us. Only then will we defeat the old paradigm by making it obsolete. I was diagnosed nineteen years ago with autoimmune disease Ankylosing Spondylitis, which is chronic arthritis. 
At the time of the study it was during a period when I would go off my pain medication, and usually it is a week in hell but this time it was different, the pain was so minimal that I was able to begin lifting weights again and I had no depression and was able to sleep. After the bout of detoxifying, I felt much better and had a better sense of understanding life in general I would say. It's been about one month since I stopped urinating as much and the pain is not as intense as it was before I came into this study. I am looking forward to seeing how far this research can help me and every day I hope for total recovery. Paul Arsenault Depression, Anxiety 2018 The experience of being a part of this research has helped me to better understand how I make decisions and has had a positive and beneficial effect on my emotional and physical health After a period of discussions, I began to realize a sense of calm where in my life and because of the business that I am working in, I mostly stressed. I was surprised when I began noticing the color of my urine had changed and become darker than normal because I always try to keep well hydrated and drink lots of water. My urine had a foul odor, and I was going to the bathroom quite a lot more than I usually do. I was only involved with the study for about eight weeks until the detoxification started, and shortly after I began to feel clear headed and able to think things through with more accuracy. I would like to learn more about C=ea 2 to gain more knowledge of the brain and how it is connected to the body's health. The detoxification lasted quite a while and I was surprised because I have always felt that I eat well. I believe this research has great potential to help people with mental health problems as it has helped me with anxiety in such a short time. Joseph Richard Crant-Male-Canada Depression, Anxiety It was because of my own experience with mental health while growing up in Ontario Canada, that I first began researching however, It was not to find relief for myself, it was to protect my own children from negative affect of bullying, I believed that I could inform and educate them about why some people could be so mean, I had no idea that it would come to what it has today. All I want to say is that on the day where I had first noticed something different about how I was feeling, I literally blurted out; "is this what it's like to feel normal?" It felt like everything associated with fear, anxiety, depression, suspicion, superstition, was gone in an instant. A few weeks later, toxins started to flush from my body through the urine, and I thought I had an infection. A urine test did not show infection, the condition persisted for about three weeks with pain and pressure in the lower abdomen, frequent urination and foul odor. A second testing also indicated no infection. Natures Paradigm Soon after the detoxing episode, I had developed a "sense of detachment yet of calm" as reported here by case study Jon Elder, and the same as with Nathan Albright, "I noticed my sense of humor returning". With the weight loss, my body began to trim up and my core strengthened, lifting and carrying heavy things at work (construction) was much easier, Lee Riddolls; stated "I was able to begin lifting weights again". It took me a while but since the "flush" in 2009, and the same as what Paul Arsenault had reported here, "this research has helped me to better understand how I make decisions". 
environment, (C=ea 2 ), second we integrate and adapt to the environment by struggle to meet "Equilibrium", (Nash), and eventually, it becomes possible to live within this earth environment; conscious, in balance with nature, or " in tune with the universe", evolving into an efficient ecosystem, (E=mc 2 ), experience a second cognitive evolution, (C=ea 2 ), and complete the balance with nature, (Nash Equilibrium) [4] to live out the rest of our life in health. As biological organisms, it is understandable that it may be in our natural state to seek solitude and be at rest, and as cognitive organisms or "thinking beings", we must have ability to re-solve complex problems else we stagnate and retard or decay in some manner as aging poorly would dictate and show up in diseases. The current state of research of this theory is largely based on self-reporting from people like me that can be considered as a case study approach. We have stories of depression, therapeutic conversations that help re-orient attention from depression as a crisis to it being a response to the absurdity and difficulty of life that provides a period of rest where recuperation can be undertaken for the stresses of life, and where recovery was accompanied by the body ridding itself of what appeared to be toxins that were associated with the depression, after which there was a restoration to generally neutral to positive feelings. Since then, I have not had any prolonged period of major depression thus far, and recovery from somewhat low feelings lasting for several days at a time has also been associated by what appeared to be the body cleansing itself of certain toxins. I have not yet been able to have these tested, but that is something that I believe would be worthwhile in the future. Stories like my own, and that of others, have a certain power to them. For example, a paper published in the Journal of Behavioral Health Services in April 2011 found a positive role in selfreported mental health measures in predicting functional outcomes for veterans. It should be noted that just as I have struggled with PTSD since early childhood, so veterans too are often found to struggle with it, and this struggle is often related to other mental health issues with anxiety and depression. Placing one's story in a context often helps to make it easier to cope with, and it also can provide therapeutic benefit for oneself and for others. Although this approach is qualitative instead of quantitative, there are positive results from being able to express one's story and share it with others and, also, to gain insights from the stories of others, especially where there are similar patterns that may be recognized between a variety of self-reported stories. Nevertheless, there are some limitations in reliance upon self-reporting and the case study approach. An oft-repeated truism is that correlation is not causation, and there are limits to the evidence that can be gathered when one is limited to the casestudy approach. Questions of mechanism as well as numerical data are difficult to determine, and there can be a certain vagueness that comes from only being able to express one's experience in a story without there being any data that can be aggregated together and analyzed in detail as part of experimental research. 
In that light, one could see the efforts at helping people who have prolonged and/or deep periods of major depression ought to take advantage of as many approaches as possible, both qualitative approaches that allow them to report on their own mental and emotional state as well as quantitative approaches that can provide a detailed and data-driven understanding of how the recovery from major depression appears in various measurements. Suggestions for Future Research I would like to briefly discuss some suggestions for future research to further integrate this paradigm regarding depression into existing studies. As many of the cases so far in the body of research that Crant has developed so far in his studies of depression include what appears to be the passing of foulsmelling urine, urine analysis related to the recovery of major depression is an obvious area of potential research. Such analysis would be able to help relate depression to physical causes related to the chemical contents of the body and point to the importance of the body's natural systems in helping to preserve mental health. Likewise, the existing body of case studies, and further case studies that are undertaken, can be examined using correlational studies that seek to determine the common elements in the story. If similar processes and events can be found to occur in a sizable body of people recovering from major depression, then it may be possible to find certain avenues of approach for further research that would help to point out the mechanisms by which the body seeks to rest and recover through depression and then is able to rid itself of that which is dragging it down. On a less chemical and statistical level, we may view the therapeutic efforts of reframing thoughts and ideas about depression as an approach that shows some marked similarities to Cognitive Behavioral Therapy, a common approach undertaken in various mood disorders like depression and anxiety disorders that seeks to give the mind a greater amount of tools in order to better understand the absurdity of life and the need to be resilient in the face of life's stresses and difficulties. Finally, the coincidence of PTSD and depression in athletes and soldiers is something that has been noted in the groundbreaking research on CTE by Dr. Bennet Omalu, most famous for being the doctor who first discovered the problem with repeated brain trauma in sports. His papers on Chronic Traumatic Encephalopathy in athletes and veterans has suggested that traumatic experiences can cause the development of tau proteins in the brain that are associated with depression and other mental illnesses, which may provide a physiological basis for a great deal of our understanding of PTSD and related mental illnesses. These are all among the areas where future research may be very profitable [5]. Conclusion Hypothetical cognitive model/theory C=ea 2 supplies multiple perception models and insights, and when delivered in educational course study, discussion and discourse, is effective in the remission of mental health and other conditions in some individuals. Physical manifestation of systemic detoxification is possible where the individual can achieve resolution of complex human problems and/or issues when utilizing C=ea 2 as a model for decision making. 
C=ea 2 is suggested to be a pathway that permits the physical body to enter the relaxation response [6], in which Theta is sustained in the individual, and systemic detoxification becomes possible as a result of gaining new insight into human behavior.
2020-01-16T09:04:32.766Z
2018-06-30T00:00:00.000
{ "year": 2018, "sha1": "e9ca20e56278915449f424d4f74ac9d1a72e63db", "oa_license": null, "oa_url": "https://doi.org/10.20533/ijehis.2046.3332.2018.0019", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "16742526c3b044f64248aab817801e27669a2645", "s2fieldsofstudy": [ "Law", "Psychology", "Philosophy", "Education" ], "extfieldsofstudy": [ "Psychology" ] }
254714189
pes2o/s2orc
v3-fos-license
Trophoblast Migration with Different Oxygen Levels in a Gel-Patterned Microfluidic System In the placenta, substances such as nutrients, oxygen, and by-products are exchanged between the mother and the fetus, and proper formation of the placenta determines the success of pregnancy, including the growth of the fetus. Preeclampsia is an obstetric disease in which the placenta forms incompletely, and it is known to occur when trophoblast invasion is abnormal. Trophoblast invasion is controlled by oxygen concentration: HIF-1α levels change with oxygen concentration, producing differences in cell mobility. MMP-2 and MMP-9 are observed at high levels in the endometrium involved in trophoblast invasion, and their expression is regulated by oxygen concentration. In this work, cell culture was conducted using a gel-patterned system combined with a hypoxic chamber. Before the chip experiment, the difference in the expression of MMP-2 and MMP-9 according to oxygen concentration was confirmed using the hypoxia chamber. Trophoblast cells (HTR8/SVneo) and endothelial cells (HUVECs) were then cultured separately on a microfluidic chip, divided by a hydrogel physical barrier. The chips were cultured in a hypoxic chamber under controlled oxygen levels. The mobility of trophoblast cells cultured on the chip was upregulated in the hypoxic environment. This suggests that the hypoxic environment of the endometrium, where trophoblast invasion occurs, plays a role in increasing cell mobility. Introduction The placenta is a temporary organ that exchanges substances between the mother and the fetus during pregnancy, and its proper formation determines the success of pregnancy [1]. Placentation begins with trophoblast invasion, in which differentiated trophoblasts migrate into the endometrium and invade the maternal spiral arteries [2]. In the early stage of trophoblast invasion, a low oxygen partial pressure of 1-3% O2 is maintained, and the normal oxygen partial pressure of 5-8% O2 is later restored through spiral artery expansion [3]. In preeclampsia, however, the spiral arteries do not expand properly, and a hypoxic environment is maintained continuously [4]. This inadequate development leads to high blood pressure, proteinuria, and edema [5][6][7]. It is therefore important to study the mechanism of trophoblast invasion under a hypoxic environment. A hypoxic environment is a major factor that increases trophoblast invasion [8,9]. Hypoxia-inducible factor-1 alpha (HIF-1α) expression in the endometrium is observed clinically [10]. The oxygen level affects the degree of degradation of HIF-1α, which promotes gene expression by binding to the hypoxia response element (HRE) [11,12]. It also affects matrix metalloproteinase (MMP) expression, which is involved in cell migration via degradation of the extracellular matrix [13]. MMPs are essential factors that each degrade specific substrates and thereby regulate cell mobility [14]. The expression of MMP-2 and MMP-9 within the MMP subfamily is known to be responsible for trophoblast mobility [15], and many studies have confirmed that a hypoxic environment increases the expression of MMP-2 and MMP-9 in trophoblasts [16].
In trophoblast invasion studies, traditional 2D culture has limitations in measuring cell migration distance and direction in co-culture [17]. Recent cell research uses microfluidic chips to overcome this problem [18][19][20]. A microfluidic chip can separate cell culture areas with various hydrogels, and cell interactions can be analyzed through substance diffusion across the porous hydrogel structure. The setup can also be configured to analyze cell migration distance and direction. In addition, the polydimethylsiloxane (PDMS) used in microfluidic chips is well known for its high gas permeability, which has the advantage that oxygen can be controlled for cellular respiration. This experiment was designed to analyze changes in cell mobility under oxygen control. Before proceeding with the experiment on the chip, the change in cellular mRNA expression under oxygen regulation was analyzed. The chip was designed to detect cell movement due to differences in oxygen concentration and was fabricated via gel patterning using a height difference (Figure 1). Generally, microfluidic chips are manufactured using natural hydrogels with low stiffness [21]. In our chip fabrication, the microfluidic chip was manufactured with gelatin methacrylate (GelMA), paying attention to the fact that a high, tissue-like stiffness [22][23][24] also affects cell movement. After manufacturing the chip, trophoblast migration was observed through cell culture in each channel. The oxygen concentration was controlled using the hypoxic chamber.
Device Fabrication The chip was manufactured through a photolithography process using the SU8 photoresist series (Kayaku Advanced Materials, Westborough, MA, USA). To proceed with gel patterning, the heights of the two chip layers were made to differ. SU8-2050 was used on a 4-inch silicon wafer at 50 µm height for the bottom layer mold, and SU8-2150 was used at 400 µm for the top layer mold. Each photoresist was spread using a spin coater (POLOS 150i) under the respective height condition. Soft baking, UV exposure, hard baking, and development were performed according to the manufacturer's requirements for each height condition. The molds were stored overnight in a 68 °C oven for stabilization. Polydimethylsiloxane (PDMS; Dow Corning, Midland, MI, USA, SYLGARD 184) and curing agent were mixed in a ratio of 10:1 w/w and degassed with a vacuum pump for soft lithography using the chip mold. The PDMS mixture was poured into the chip mold and cured in a 68 °C oven for 4 h. Cured PDMS was trimmed with a surgical mesh and punched with a 1 mm biopsy punch. For sterilization, the PDMS chip underwent UV exposure for at least 30 min on a clean bench. Gel Patterning and Cell Seeding The bottom layer of PDMS was attached to the slide glass after plasma treatment (60 W, 1 min). The top layer was then attached to the bottom layer according to the alignment key after the same plasma treatment as the bottom layer. The GelMA solution was injected into the gel channel inlet to perform gel patterning, and UV exposure was performed for 5 min at room temperature. To improve cell adhesion, 50 ng/mL of fibronectin (Sigma-Aldrich, Saint Louis, MO, USA, F1141) was injected into each cell channel and reacted at 37 °C for 2 h in a humid chamber, followed by two washes with DPBS. Cells cultured in a T-75 flask were detached using 0.25% Trypsin-EDTA (Gibco, 15400), and the cell pellet was diluted to a concentration of 10^7 cells/mL. The diluted cell suspension was injected into each cell channel. After incubation for 2 h in an incubator, the medium was replaced to confirm cell adhesion and to remove non-attached cells. The chips were cultured in an incubator (21% O2) and a hypoxia chamber (3% O2). Media were replaced every two days. Molecular Diffusion Analysis The degree of diffusion was measured by fluorescence to check the possibility of cell interaction through the GelMA structure. Without cell seeding, 1.0 mg/mL of 40 kDa FITC-dextran (Sigma-Aldrich, FD-40) was injected into the trophoblast channel. Fluorescence images of the trophoblast and endothelial cell channels were obtained at 2 h intervals by time-lapse imaging using a fluorescence microscope (Etaluma Inc., Carlsbad, CA, USA, LS620). ImageJ was used to calculate the fluorescence intensity. Cell Tracker Staining Cell tracker staining was performed before the cell seeding process. HUVECs were stained with 5 µg/mL of CellTracker™ Red CMTPX Dye (Invitrogen, Carlsbad, CA, USA, C34552), and HTR8/SVneo cells were stained with 5 µg/mL of CellTracker™ Green CMFDA Dye (Invitrogen, C7025). The cell tracker was diluted in fresh medium suitable for each cell type.
After replacing the media with the working solution, the cells were incubated at 37 °C for 30 min and then washed twice with DPBS. The dyed cells were used in the cell seeding process. Fluorescence images were captured using a fluorescence microscope (Etaluma Inc., LS620). Quantitative Real-Time PCR (qRT-PCR) Quantitative RT-PCR was performed to measure MMP-2 and MMP-9 mRNA expression in trophoblasts according to oxygen level (21%, 8%, and 3% O2). HTR8/SVneo cells were seeded in a 6-well plate at 3 × 10^5 cells per well and incubated for 24 h in an incubator (37 °C and 5% CO2). The medium was then replaced with fresh medium and the cells were incubated for 24 h under the different oxygen conditions. For the 21% O2 condition, cells were incubated in the incubator; for the other oxygen conditions (8% O2 and 3% O2), cells were incubated in a hypoxic chamber (BioSpherix, Parish, NY, USA, ProOx110). RNA extraction was then performed using the RNeasy Mini Kit (QIAGEN, Valencia, CA, USA, 74104) according to the manufacturer's manual. After RNA extraction, the RNA concentration was measured using a NanoDrop spectrophotometer, and the RNA sample was diluted to 100 ng/µL with RNase-free water. cDNA was prepared using PrimeScript™ RT Master Mix (Takara Bio Inc., Kusatsu, Japan, RR036). Real-time PCR was then performed in triplicate on a real-time PCR instrument using TB Green® Premix Ex Taq™ II (Takara Bio Inc., RR820). Gel Patterning in a Microfluidic Chip In order to perform co-culture in microfluidic chips, various methods are used to form a physical barrier separating the cell culture areas. Cells are separated by inserting a porous membrane or a hydrogel as a physical barrier inside the chip. Among the hydrogel methods, gel patterning relies on surface hydrophilicity and a low channel height, rather than liquid injection pressure, to pattern a physical barrier via capillary action. For the chip used in this experiment, the transfer of the GelMA solution without external pressure along the hydrophilic, plasma-treated surface is shown in Figure 2. Bottom-layer to top-layer bonding is achieved through oxygen plasma. Plasma treatment also renders the PDMS surface hydrophilic, and this hydrophilic surface enables capillary action across the small channel height. The overlap between the bottom and top layers creates the height difference with the bottom layer, and the solution can be observed to move only in the non-overlapping bottom channel. Gel patterning by capillary action can reduce the handling difficulty of solution injection in chip manufacturing, provided that hydrophilic modification is achieved through appropriate surface treatment. Cell Separation and Molecular Diffusion of GelMA Structure In a co-culture using different cell lines in microfluidic chips, independent channels must be separated by physical barriers. In this experiment, cell attachment to the coating was enabled using suction. GelMA has favorable physiological properties and is easy to handle without gel collapse; the compressive modulus of GelMA gel formed from the solution is about 20–30 kPa [24]. The separability between channels was confirmed using a fluorescence microscope (Figure 3). No fluorescent signals were identified in the GelMA structure, so no cells entered the gel structure during the cell seeding process. Physical barriers made of hydrogels have the advantage of permitting diffusion.
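As an order-of-magnitude check on this diffusion, a simple one-dimensional estimate t ≈ L²/(2D) can be applied to the 40 kDa FITC-dextran assay described in the Methods. The diffusion coefficient used below is an assumed literature-scale value for 40 kDa dextran in aqueous solution (the GelMA network would reduce it somewhat), not a value measured in this study, and the length is taken from the 800 µm interval of the chip design:

# Rough timescale for a 40 kDa dextran to cross the gel region (illustrative values only).
D = 5e-7                      # cm^2/s, assumed diffusivity of 40 kDa dextran in water
L = 0.08                      # cm, ~800 um gel region separating the two cell channels
t = L**2 / (2 * D)            # seconds, 1D diffusion timescale t ~ L^2 / (2D)
print(f"t ~ {t/3600:.1f} h")  # ~1.8 h, on the order of the 2 h imaging interval used

Under these assumptions the crossing time is on the scale of a few hours, which is consistent with the fluorescence signal appearing in the endothelial channel within the imaging window described below.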
Diffusion allows for cell interaction by transferring substances secreted by cells. To check the degree of diffusion in the GelMA structure used in this experiment, we used fluorescent molecules. The experiment was conducted with FITC-dextran, a fluorescent substance often used in diffusion experiments. FITC-dextran with a size of 40 kDa was injected into the trophoblast channel and observed to diffuse over time into the endothelial channel (Figure 4A). Analysis of the fluorescence intensity with ImageJ showed that the intensity in the endothelial channel increased over time (Figure 4B). The structure could therefore be used without problems for the transfer of substances mediating cell interaction between channels. Hypoxia Promotes MMP-2 and MMP-9 mRNA Expression in HTR8/SVneo A hypoxic environment is known to be crucial in regulating cell migration during trophoblast invasion. MMP-2 and MMP-9 are the most distinctly expressed proteins in the endometrium where trophoblast invasion occurs, and their expression is regulated by oxygen level. MMP is a family of proteins involved in the degradation of the extracellular matrix, and MMP-2 and MMP-9 are classified as gelatinases, which can degrade gelatin and collagen. Before cell culture in a microfluidic chip, a pre-experiment was performed on a 6-well plate to analyze the regulation of gene expression according to oxygen level. In an oxygen-gradient experiment using a hypoxic chamber with an adjustable oxygen level, the mRNA expression of MMP-2 and MMP-9 was affected as the oxygen level decreased (Figure 5). The expression of MMP-9 mRNA showed a stronger increase in the hypoxic environment than under the other oxygen concentration conditions. Comparison of Trophoblast Cell Migration with Different Oxygen Levels Chip cultivation was performed in a hypoxia chamber to observe cell migration at different oxygen levels. The chip design used a top layer with an interval of 800 µm. Cell seeding was carried out for each channel, and non-adherent cells were removed by washing with culture media. The first media wash was defined as the day 0 condition. The chips were kept in static culture using yellow pipette tips as reservoirs. Bright-field images were acquired while changing the medium every 48 h.
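Migration in this study was assessed from the bright-field and cell-tracker images. As an illustration only, the small sketch below shows one way such an image series could be quantified as a migration depth into the gel region; the binary mask, boundary column, and pixel size are hypothetical inputs and are not part of the published ImageJ workflow:

import numpy as np

def migration_depth_um(cell_mask, gel_start_col, um_per_px):
    # cell_mask: 2D boolean array, True where cells were segmented (e.g. by thresholding
    # the CellTracker Green channel); columns to the right of gel_start_col lie in the gel.
    gel_region = cell_mask[:, gel_start_col:]
    occupied = np.where(gel_region.any(axis=0))[0]
    if occupied.size == 0:
        return 0.0
    return float((occupied.max() + 1) * um_per_px)

# Hypothetical usage, comparing normoxic and hypoxic chips imaged on the same day:
# depth_21 = migration_depth_um(mask_21pct_day8, gel_start_col=500, um_per_px=1.6)
# depth_3  = migration_depth_um(mask_3pct_day8,  gel_start_col=500, um_per_px=1.6)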
A comparison of cell migration on days 6 and 8 between the normoxic (21% O2) and hypoxic (3% O2) environments showed that hypoxia promoted cell migration into the GelMA structure (Figure 6). This result is related to the increase in MMP-2 and MMP-9 gene expression with decreasing oxygen concentration. GelMA shares the same amino acid sequence as collagen, so the increase in the gelatinases MMP-2 and MMP-9 also promotes gelatin degradation during trophoblast invasion. Conclusions In this study, we analyzed the migration of trophoblast cells in a gel-patterned chip. In addition, changes in mobility were analyzed by mimicking the oxygen environment of the endometrium in a hypoxia chamber. For the cell migration evaluation, a 3D structure was fabricated using GelMA, which has a higher stiffness than natural hydrogels, thereby limiting the indiscriminate cell movement that the low stiffness of natural hydrogels would allow; the aim, in terms of mimicking the tissue, was that only cells whose mobility was genuinely affected would migrate. The mRNA expression of MMP-2 and MMP-9, migration-related proteins, was measured under oxygen level control, and the expression level increased in the hypoxic environment. Subsequently, incubation on the chip showed that cell migration was regulated by oxygen concentration.
According to the results of this experiment, the hypoxic environment present where trophoblast invasion takes place acts as a factor that enhances trophoblast cell migration and invasion.
2022-12-16T16:10:54.666Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "6ebd5628a50b9ca8ccdcc8fd6a481906e2ce96fe", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/13/12/2216/pdf?version=1671006166", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ac5294970dc4801c9b342aadad08f268ada9cf4c", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119340354
pes2o/s2orc
v3-fos-license
Artin groups and Yokonuma-Hecke algebras We attach to every Coxeter system (W,S) an extension C_W of the corresponding Iwahori-Hecke algebra. We construct a 1-parameter family of (generically surjective) morphisms from the group algebra of the corresponding Artin group onto C_W. When W is finite, we prove that this algebra is a free module of finite rank which is generically semisimple. When W is the Weyl group of a Chevalley group, C_W naturally maps to the associated Yokonuma-Hecke algebra. When W = S_n this algebra can be identified with a diagram algebra called the algebra of `braids and ties'. The image of the usual braid group in this case is investigated. Finally, we generalize our construction to finite complex reflection groups, thus extending the Broue-Malle-Rouquier construction of a generalized Hecke algebra attached to these groups. by a presentation made of 'braid relations', S | sts . . . mst = tst . . . mst ∀s, t ∈ S , a monoid B + of positive braids defined by the same presentation, and an algebra, called the Iwahori-Hecke algebra. This algebra H W is defined over a ring k containing elements u s , s ∈ S subject to the condition u s = u t if s, t both lie in the same conjugacy class, as the quotient of the monoid algebra kB + by the relations (s − 1)(s + u s ) = 0 for s ∈ S. It is a deformation of the group algebra of W , obtained by the specialization at u s = 1. When W is the Weyl group of some reductive group, H W admits a natural interpretation as a convolution algebra. The specialization at u s = −1 of H W admits a natural central extension which is also a quotient of kB, recently defined in [33]. In this paper we define another natural object, a k-algebra C W which is an extension of H W , and admits a 1-parameter family of morphisms B → C W . This algebra admits generators g s , e s , s ∈ S and is defined by generators and relations in section 3.1. We prove (see theorem 3.4) that it is a free k-module. When W is finite, we show that C W has rank |W |.Bell(W ), where Bell(W ) is a natural generalization of the Bell number Bell n of partitions of a set of n elements, namely the number of reflection subgroups of W . Precisely, in the general case a basis of C W is naturally indexed by couples (w, W 0 ) for w ∈ W and W 0 a finitely-generated reflection subgroup of W . The original motivation for this algebra comes from an analysis of the so-called Yokonuma-Hecke algebra associated to a Chevalley group G and its unipotent radical U, namely the Hecke convolution ring H(G, U), defined by Yokonuma in [44]. Assume W is the Weyl group of G, with generating set S. Part of the natural generators of this algebra are directly connected to the structure of the torus, while the other ones are in 1-1 correspondence with S and satisfy braid relations, together with a quadratic relation also involving elements of the torus. In [21], using a Fourier transform construction, J. Juyumaya introduced other natural 'braid' generators g s , s ∈ S, for which the quadratic relation now involves some idempotent e s (in which is 'hidden' a linear combinations of elements of the torus). Therefore, there is a natural subalgebra generated by the g s , e s , and a natural question is to find a presentation for this subalgebra, at least when the field of definition of G is generic enough. The algebra C W that we introduce provides an answer to that question. 
More precisely, a better answer is a natural quotient C R W of C W where reflection subgroups, in natural 1-1 correspondence with root subsystems, are identified if they have the same closure (see section 3.4). Although one is, at least since Tits's classical article [43], somewhat accustomed to such a phenomenon, it remains surprising that once again such an object arising from reductive groups admits a natural generalization to arbitrary Coxeter groups. This algebra C W can be viewed as a deformation of the semidirect product C W (1) of the group algebra of W with a commutative algebra spanned by the collection of finitely generated reflection subgroups of W . We show in theorem 3.10 that, when W is finite and under obvious conditions on the characteristic, this algebra C W (1) is semisimple, and therefore C W is generically semisimple. For W = S n this generalizes and provides a more direct proof of a result of [3]. Actually, we show that in the case W = S n and in characteristic 0, the algebra C W (1) is split semisimple. The question about a similar statement for other Weyl groups raises new problems on the normalizers of reflection and parabolic subgroups in finite Weyl groups (see section 3.7). In section 4 we introduce a family of morphisms Ψ λ : kB → C W (u) and we exhibit an unexpected connection between the quotient of the group algebra of the braid group appearing inside the Yokonuma-Hecke algebra of type A and (a specialization of) the one connected with the Links-Gould polynomial invariant of knots and links. We are then able to deduce from Ishii's work on the Links-Gould invariant a new relation inside the Yokonoma-Hecke algebra. Amusingly enough, we notice that Ishii's work and Juyumaya's work on these previously unrelated topics appeared following each other in the same issue of the same journal (see [22,18]). A natural question is whether the natural map B → C W (u) is injective. Since there is a natural (surjective) map C W (u) → H W (u), this would be the case if the induced map B → H W (u) was itself injective. Right know, this is an open question, settled (positively) only in rank 2, by work of Squier [39], and an alternative proof can be found in [27]. Our question of whether B → C W (u) is injective therefore may or may not be a consequence of the solution of this one. A possibly easier question is whether the (restriction to B of the) maps Ψ λ are injective for generic λ. We show in section 4.4 that a simpleminded application of the existing methods does not suffice to conclude on this point. They however incite to define and look at a new monoid representation B + → C W with positive coefficients. In the last section, we show that the natural quotient C p W of C W , where reflection subgroups are identified if they have the same parabolic closure, can be generalized to the setting where W is a finite complex reflection group, in such a way that C p W is a natural extension of the generalized Hecke algebra H W associated to W by Broué, Malle and Rouquier in [6]. The main conjecture on H W , that H W is a free module of finite rank, is naturally extended to an a priori stronger conjecture on C p W , that we prove to be true for a couple of cases. In particular we prove this conjecture for W the complex reflection group of monomial n × n matrices with coefficients d-th roots of 1, which provides a natural extension of the so-called Ariki-Koike algebra. 
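To make the rank statement from the introduction concrete in the simplest case: for W = S_n (type A_{n-1}), Bell(W) is the ordinary Bell number Bell_n, so C_W(u) has rank n!·Bell_n. The following small sketch (plain Python, purely illustrative and not part of the paper) computes these values with the Bell triangle:

from math import factorial

def bell(n):
    # Bell triangle: Bell_n is the last entry of the n-th row.
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return row[-1]

for n in range(1, 6):
    print(n, bell(n), factorial(n) * bell(n))
# Bell_n = 1, 2, 5, 15, 52, ..., so the ranks n!*Bell_n are 1, 4, 30, 360, 6240, ...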
As a conclusion, we wonder whether other classical objects attached to Iwahori-Hecke algebras, like Kazhdan-Lusztig bases and Soergel bimodules, can be naturally extended to this setting. In particular it would be interesting to construct an extension of Lusztig's isomorphism of [28] to C W . We also consider very likely that the whole machinery of Cherednik algebras, including the so-called KZ functor, can be generalized in a natural way to our 'extended' setting. We leave this to future work. Acknowledgements. I thank R. Abdellatif, S. Bouc, C. Cornut, T. Gobet, K. Sorlin, R. Stancu and especially F. Digne and J.-Y. Hée for discussions on root systems and Coxeter groups. I thank A. Esterle for a careful reading of a first draft. Preliminaries 2.1. Yokonuma-Hecke algebra. Following Yokonuma's original paper [44], we use Chevalley's notation as in [9]. Let G be the Chevalley group associated to a semi-simple complex Lie algebra g and to a finite field K = F q and H, W, U ⊂ G as in [9]. In modern terms, G is a split simple Lie group of adjoint type over F q , H is a fixed maximal torus, U the unipotent radical of a fixed Borel subgroup containing H, and W is the normalizer of H in G. In general, the correspondence with modern notations is explained in [8,41]. To each root α of g we let ϕ α : SL 2 (K) → G denote the associated morphism, and Choosing a system α 1 , . . . , α l of simple roots, we let ω i = ω αi . There is a short exact sequence 1 → H → W → W → 1, where W is the corresponding Weyl group. Each ω α is mapped in W to the reflection s α associated to α. The Weyl group admits a presentation as Coxeter system (W, S) with S = {s 1 , . . . , s l } in 1-1 correspondence with the set of simple roots under s i = s αi ↔ α i . The subgroup H is generated by the h α,t . For short, we let h i,t = h αi,t . In [9], Chevalley denotes h α the corootα associated to α. In order to facilitate cross-references between [9] and [5] we will use both notations : h α =α. The maximal torus H is described in [9] as the image of Hom(L, K × ), where L is the root lattice, under the map χ → h(χ) where h(χ) is an automorphism of the associated complex Lie algebra g acting trivially on the Cartan subalgebra and by h(χ)X r = χ(r)X r on the generator associated to the root r. With these notations, h α,t = h(χ α,t ) where χ α,t (r) = t r(hα) = t r(α) . In [44], théorème 3, T. Yokonuma proves that the Hecke ring H(G, U) over Z admits a presentation by generators a(h), h ∈ H, a 1 , . . . , a l and relations The following proposition is crucial for us. Parts (1) and (2) are standard, parts (3) and (4) appear to be new, at least in the general case. The maximal torus H can be identified with (K × ) l through the identification with Hom(L, Choosing a generator ζ of K × , and therefore an isomorphism where t j = ζ mj , m j ∈ Z/(q − 1)Z, and therefore with the l-tuple (α i (m 1β1 + · · · + m kβk )) 1≤i≤l ∈ (Z/(q−1)Z) l . Let us now assume that β 1 , . . . , β k forms a basis of a root subsystem. We consider the map Φ : (Z/(q − 1)Z) k → (Z/(q − 1)Z) l given by (m 1 , . . . , m k ) → [α i (m 1β1 + · · · + m kβk )] 1≤i≤l . It is a Z-module homomorphism, with kernel the set of m 1 , . . . , m k such that m 1β1 + · · · + m kβk lies in the kernel of all α i 's modulo q − 1. Thereforeẽ β1 . . .ẽ β k is mapped to Let F denote the sub-lattice of the co-root lattice spanned byβ 1 , . . . ,β k , and C the Cartan matrix of the root system. 
The values obtained as v ∈ Im(Φ) are exactly the image of F under C modulo q − 1, and Ker Φ depends only on q − 1, F and C. Let r be a prime dividing q − 1 and not dividing det(C). We let Φ r : F k r → F l r denote the reduction of Φ modulo r. Then, under the map Since C is invertible modulo r, the image ImΦ r of the lattice F mod rL under C determines F mod r. Since there is a finite number of possible lattices F , there exists r 0 such that, for all prime r ≥ r 0 , the knowledge of F mod rL determines F . Let us choose such a prime number. By the Dirichlet prime number theorem there exists a prime p = q such that p ≡ 1 mod r, that is r|q − 1. Therefore, the subalgebra generated by theẽ α is 'generically' freely spanned by a family indexed by the collection of all closed symmetric subsystems of (the dual of) our original subsystem. Recall that there exists reduced root systems with proper closed symmetric subsystems of the same rank, for instance the long roots in type G 2 form a subsystem of type A 2 with this property. 2.2. Juyumaya's generators. In [21], Juyumaya introduced new generators L i 's of H(G, U ) in replacement of the a i 's, keeping the a(h) as they are. Choosing a non trivial additive character ψ of (K, +), and using some kind of Fourier transform, he defines for every root α the element ψ α = r∈K × ψ(r)h α,r . Then, letting L i = q −1 (ẽ αi + a i ψ αi ) he shows, in collaboration with S. Kannan ( [20], theorem 2) that H(G, U ) admits a presentation with generators L 1 , . . . , L l , a(h), h ∈ H and relations Then, letting u = q −1 , e α = (q − 1) −1ẽ α , e i = e αi and g i = −L i , this presentation becomes the following one : Yokonuma-Hecke algebras of type A. A particularly studied variation of the above construction mimics the situation above for the (non-semisimple !) reductive group GL n (K) with K a 'field of order d + 1'. Let us fix a commutative ring k (with 1), u ∈ k, d ∈ Z >0 . We assume that d and u are invertible in k. The literature on the subject, see e.g. [11], denotes Y d,n (u) and calls the Yokonuma-Hecke algebra of type A the k-algebra generated with generators g 1 , . . . , g n−1 , t 1 , . . . , t n and relations (1) whenever i = j and 1 ≤ i, j ≤ n. The elements g i are invertible, with inverse g −1 It can be easily proved that the following relations hold : (5) e ij = e ji for all i = j (6) e i,j e k,l = e k,l e i,j for all i = j, k = l (7) g i e j,k = e si(j),si (k) g i for all i, j, k with k = j (8) e 2 ij = e ij for all i = j. The subalgebra of Y d,n (u) generated by the g i 's and e i 's has been investigated in the past years. J. Juyumaya and F. Aicardi have introduced a diagram algebra E n (u) called the algebra of braids and ties, such that this subalgebra is an homomorphic image of E n (u), this morphism being generically injective (actually already for d ≥ n, see [17]). A Markov trace was subsequently constructed on this algebra of braids and ties, see [1]. This algebra is efficiently studied in [36], where S. Ryom-Hansen provides a faithful module for it, and uses it to show that the algebra has dimension n!Bell n , where Bell n is the n-th Bell number. Theorem 3.4 below generalizes this last statement. Now we notice that, in [12], M. Chlouveraki and L. Poulain d'Andecy introduce other generators They notice that these generators also satisfy the braid relations. We will give a general explanation for this phenomenon in section 4.1. 3. Construction of the algebra C W 3.1. General construction. Here k is a commutative ring (with 1). 
Let W denote a Coxeter group, with generating set S. We let R ⊃ S denote its set of reflections. If W is finite this set can be defined as the geometric reflections of W in its natural representation, and in the general case this is the set of conjugates of S. We denote P f (R) the set of all finite subsets of R, and by P(R) the set of all its subsets. We recall that a reflection subgroup of W is a subgroup generated by a subset of R. We also recall that a Coxeter group W given by the Coxeter system (W, S) is finitely generated as a group if and only if S is finite. Indeed, if W = x 1 , . . . , x n for some x 1 , . . . , x n , we can write the x i 's as a product of a finite number of elements of S, hence W is equal to its standard parabolic subgroup (W X , X) for some finite X ⊂ S. Since W X ∩ S = X ([5], IV §1 No. 8, corollaire 2) this proves that S = X is finite. Finally, we recall from Dyer's thesis the following basic fact, extending a well-known property of finite Coxeter groups to general ones : Proposition 3.1. (Dyer, PhD thesis, theorem 1.8; see also [16] corollary 3.11 (ii) and Deodhar [15]) Let W 0 be a reflection subgroup of W . Then W 0 is a Coxeter group (W 0 , S 0 ) with S 0 ⊂ R and W 0 ∩ R = R 0 , with R 0 the set of reflections of (W 0 , S 0 ). Moreover, if W 0 is generated by J ⊂ R, then every element of R 0 is a conjugate inside W 0 of an element of J. For every s ∈ S, we choose u s ∈ k such that s 1 ∼ s 2 ⇒ u s1 = u s2 , where a ∼ b means that a, b ∈ S lie in the same conjugacy class. We set u = (u s ) s∈S and define C W (u) to be the associative unital k-algebra defined by generators g s , s ∈ S, e t , t ∈ R, and relations (1) g s g t g s . . . (2) e 2 t = e t for all t ∈ R (3) e t1 e t2 = e t2 e t1 for all t 1 , t 2 ∈ R (4) e t e t1 = e t e tt1t −1 for all t, t 1 , t 2 ∈ R (5) g s e t = e sts g s for all s ∈ S, t ∈ R (6) g 2 s = 1 + (u s − 1)e s (1 + g s ) for all s ∈ S. Note that C W (u) is actually finitely generated as soon as S is finite, by the following elementary proposition. Proposition 3.2. The algebra C W (u) is generated by the g s , e s for s ∈ S. Proof. Let A be the subalgebra of C W (u) generated by the g s , e s for s ∈ S. It is sufficient to show that e t ∈ A for all t ∈ R. By definition such a t can be written as w −1 s 0 w for some s 0 ∈ S and w ∈ W . Writing w = s 1 . . . s r with s 1 , . . . , s r ∈ S, we need to prove e srsr−1...s1s0s1...sr ∈ A for all s 0 , s 1 , . . . , s r ∈ S. By induction on r this results from the relation g sr e sr−1...s1s0s1...sr−1 g −1 sr = e srsr−1...s1s0s1...sr−1sr . For w ∈ W , we let g w = g s1 . . . g sr if s 1 . . . s r is a reduced expression of w. Since the g s 's satisfy the braid relations this does not depend on the chosen expression by Iwahori-Matsumoto's theorem. For J ∈ P f (R), we set e J = t∈J e t . In order to study these elements we define an equivalence relation J ∼ K on P f (R) as the equivalence relation generated by the couples (J, K) ∈ P f (R) × P f (R) such that J contains some {s, t} and K = J ∪ {sts}. By definition this is the smallest equivalence relation containing such couples. This equivalence relation can be restated as follows. Therefore, the set of equivalence classes is in natural bijection with the collection W of finitely generated reflection subgroups of W . In particular, when W is finite, the number of equivalence classes can be identified with the number of reflection subgroups of W . 
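For reference, the defining relations (1)-(6) of C_W(u) introduced above can be written out as follows, with m_{st} denoting the order of st in W (this is only a restatement of the relations already given):

\[
\begin{aligned}
&(1)\quad \underbrace{g_s g_t g_s \cdots}_{m_{st}\ \text{factors}} = \underbrace{g_t g_s g_t \cdots}_{m_{st}\ \text{factors}} && s,t\in S,\\
&(2)\quad e_t^2 = e_t && t\in R,\\
&(3)\quad e_{t_1}e_{t_2} = e_{t_2}e_{t_1} && t_1,t_2\in R,\\
&(4)\quad e_t e_{t_1} = e_t e_{t t_1 t^{-1}} && t,t_1\in R,\\
&(5)\quad g_s e_t = e_{sts}\, g_s && s\in S,\ t\in R,\\
&(6)\quad g_s^2 = 1 + (u_s-1)\,e_s(1+g_s) && s\in S.
\end{aligned}
\]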
Notice that, when W is the Weyl group of some root system R, then reflection subgroups are in 1-1 correspondence with root subsystems (in the sense of a subset of R satisfying the axioms of root systems, as in [5]). By relations (2) and (4) above, we have e s e t = e s e t e t = e s e sts e t and thus J ∼ K implies e J = e K . Therefore, we can define e W0 for every finitely generated reflection subgroup W 0 of W , by letting e W0 = e J for any J ∈ P f (R) with J = W 0 . Notice that, when W is finite, there is a distinguished representative of the class of J ∈ P f (R) = P(R), namely J := J ∩ R. In the general case, one can make a different choice, taking for J Dyer's canonical set of Coxeter generators for J (since such set can be infinite only if the Coxeter group is not finitely generated). In the sequel, we will denote J ∈ P f (R) the chosen representative of the class of J ∈ P f (R). Description as a module. Theorem 3.4. The algebra C W (u) is a free k-module with basis the eJ g w , for w ∈ W and J ∈ P f (R). In particular, if W is finite then it has for rank the order |W | of W multiplied by the number |W| of reflection subgroups of W . We shall see in section 3.6 that |W| may be called the Bell number of type W . Proof. We denote by ℓ the classical length function on the Coxeter group W . To each J ∈ P f (R) we associate e J = t∈J e t . Let us consider J ∈ P f (R), w ∈ W and s ∈ S. Then g s e J g w = e sJs −1 g s g w . If ℓ(sw) = ℓ(w) + 1 we have g s g w = g sw and we get g s .e J g w = e sJs −1 g sw . Otherwise w can be written w = sw ′ with ℓ(w ′ ) = ℓ(w) − 1. Then g s g w = g 2 It follows that g s .e J g w = e sJs g w ′ + (u s − 1)e sJs e s g w ′ + (u s − 1)e sJs e s g w = e sJs g sw + (u s − 1)e sJs∪{s} g sw + (u s − 1)e sJs∪{s} g w . Finally, in all cases we have e s .(e J g w ) = e J∪{s} g w . Since C W (u) is generated as a unital algebra by the g s and e s , s ∈ S this proves that the set of the e J g w for J ∈ P f (R), w ∈ W , and therefore of the eJ g w for J ∈ P f (R), w ∈ W , is a spanning set for C W (u). We notice that (e J g w )e s = e J e wsw −1 g w = e J∪{wsw −1 } g w and, if ℓ(ws) = ℓ(w) + 1, then (e J g w )g s = e J g ws . If ℓ(ws) = ℓ(w) − 1, then e J g w g s = e J g ws g 2 s = e J g ws (1 + (u s − 1)e s (1 + g s )) = e J g ws + (u s − 1)e J g ws e s + (u s − 1)e J g ws e s g s = e J g ws + (u s − 1)e J e ws.s.(ws) −1 g ws + (u s − 1)e J e ws.s.(ws) −1 g ws g s = e J g ws If ℓ(sw) = ℓ(w) − 1 and ℓ(wt) = ℓ(w) + 1, then we have ℓ(swt) = ℓ(w) for the same reason as in the preceding case. and Therefore, these terms are equal as soon as we have and Since sw = wt implies swt = w, s = wtw −1 , swtw −1 s −1 = s and u s = u t , these two expressions are equal. We thus proved that the G s , E s commute with the Now, for s, t ∈ S, we denote m st the order of st in W . We let Then, We let A denote its image. Since the e J g w span C W (u) and their image maps v ∅,1 to v J ,w we get that this homomorphism is injective, and that its image surjects onto the free k-module V under the map a → a.v ∅,1 . This proves the claim. 3. 3. An extension of the Iwahori-Hecke algebra. The algebra C W (u) is an extension of the Iwahori-Hecke algebra H W (u). We let T s , s ∈ S denote the natural generators of H W (u), and T w = T s1 . . . T sr when w = s 1 . . . s r is a reduced expression of w ∈ W . (1) The map g s → T s , e s → 1 induces a surjective k-algebra morphism p : For w ∈ W , it maps g w to T w and each e J to 1. Its kernel is the two-sided ideal generated by the e s − 1, s ∈ S. 
Proof. One gets that the map g s → T s , e s → 1 induces a morphism of (unital) k-algebras p : , by checking that the defining relations of C W (u) hold inside H W (u). This is immediate for relations (1)-(5), and (6) is mapped to the defining relation T 2 . This morphism is surjective because the T s 's generate H W (u) as a unital k-algebra. By definition of g w and T w it is clear that p(g w ) = T w for all w ∈ W , and similarly that p(e J ) = 1 for all J's. By theorem 3.4 we know that C W (u) is spanned by the g w e J , with w ∈ W and J ∈ P f (R). An element x ∈ Ker p can be written w,J a w,J g w e J with a w,J ∈ k almost all zero, such that 0 = w,J a w,J T w = w ( J a w,J ) T w . Let us fix w ∈ W , and let b J = a w,J . We have b J = 0 since the T w 's form a basis of H W (u), so it is sufficient to prove that every element in x ∈ p of the form J b J e J belongs to the ideal I generated by the e s − 1, s ∈ S. This amounts to saying that e J −1 ∈ I for all J. Letting r(W 0 ) denotes the minimal number of reflections needed for generating W 0 , we prove this by induction on r( J ). The case r( J ) = 0 is obvious, the case r( J ) = 1 is a consequence of g w (e s −1)g −1 w = e wsw −1 −1 for all w ∈ W and s ∈ S. Now, if r( J ) > 1, there exists t ∈ J such that r( K ) < r( J ), where K = J \ {t}. Again because g w (e J − 1)g −1 w = e wJw −1 − 1, we can assume s ∈ S. Then, e J = e K e s and e J − 1 = e K (e s − 1) + e K − 1 ∈ e K − 1 + I, so we get e J − 1 ∈ I by the induction assumption. This completes the proof of (1). In order to prove (2), we first note that e W is central and idempotent. We prove that T s → g s e W , 1 → e W induces an algebra morphism. Since e W is central, the braid relations T s T t T s · · · = T t T s T t . . . are mapped to e mst W g s g t g s · · · = e mst W g t g s g t . . . and this holds true inside C W (u). The quadratic relation We prove that this holds true, because the relation g 2 s = 1+(u s −1)e s (1+g s ) implies g 2 s e s = e s +(u s −1)e s (1+g s ) = u s e s + (u s − 1)g s and therefore, since e s e W = e W , we get g 2 s e W = u s e W + (u s − 1)g s e W . Therefore there exists a k-algebra morphism q : H W (u) → C W (u), which maps T w to g w e W as is readily checked by induction on ℓ(w). We have p(q(T w )) = p(g w e W ) = T w , and this proves (2). 3.4. Meaningful quotients. We recall that W denotes the collection of finitely generated reflection subgroups of W , endowed with the conjugation action of W . If J ∈ P f (R), we let e J = e J = e J . The algebra C W (u) is spanned by elements e J g w for w ∈ W and J ∈ W. Let F be a W -set and p = W → F be a surjective map which is W -equivariant. Such a map can be seen as an equivalence relation on W compatible with the action of W . We also assume that Proposition 3.7. Let p : W ։ F be as above, and I p the ideal of C W (u) generated by the e J − e K for p(J) = p(K). The quotient algebra C F W (u) = C W (u)/I p is a free module, of rank |W |.|F | if W is finite. The algebra morphism p : p be the k-module spanned by the (e J − e K )g w for w ∈ W and p( J ) = p( K ). Since p( J ) = p( K ) implies p( J, s ) = p( K, s ) we know that e s I ′ p ⊂ I ′ p for all s ∈ S; since p is equivariant we have g w I ′ p g −1 w ⊂ I ′ p for all w ∈ W and therefore I ′ p e s ⊂ I ′ p . From this and the defining relation (6) of C W (u) we get I ′ p g s ⊂ I ′ p for all s ∈ S, and g s I ′ p = g s I ′ p g −1 s .g s ⊂ I ′ p . Therefore I ′ p is an ideal. 
Since I ′ p ⊂ I p we get I p = I ′ p hence I p is spanned by the (e J − e K )g w for w ∈ W and p( J ) = p( K ). The assertion on the structure as a module and the rank then follows from the previous theorem. The factorization assertion is clear from the definition of I p and proposition 3.6. Important examples of such p are the following ones : (1) F = F parab is the collection of parabolic subgroups, and the map p associates to G ∈ W the fixer of the fixed-point set {x ∈ R n ; ∀g ∈ G g.x = x} (2) If W is the Weyl group of a reduced root system R, then W can be identified with the collection of root subsystems of R. Then, one can take for F = F closed (R) the collection of closed symmetric subsystems, and for p the map which associate to a root subsystem its closure. The first example arises for arbitrary groups, and is the smaller of the two types, when both can be compared : there is a natural surjective map F closed (R) → F parab which is not bijective in general (e.g. see A 2 as the set of long roots inside G 2 ). The second one is the one which is the most relevant to the original Yokonuma-Hecke algebra H(G, U ), as C . Note that, when W has type A n , and R is the root system of type A n , then C W (u) = C R W (u) = C p W (u). Moreover, in general the morphism onto H W (u) factorizes as follows Lusztig's involution and Kazhdan-Lusztig bases. Our basic reference on Kazhdan-Lusztig bases is [37], although it deals only with the 1-parameter case, but the properties that we use here are easily generalized from this case. The general statements can also be found in [4] (see also [29] for an intermediate situation The following equalities are easily checked s e W0 H s . From this the following proposition readily follows. Proposition 3.8. There exists an involutive ring automorphism of for each w ∈ W and W 0 ∈ W. It induces similar automorphisms of C p W and C R W (when defined). It is compatible with the ring automorphism of s , s → s −1 for s ∈ S, and with Lusztig's involution of H W (as in [37,29]), that is the following diagram commutes, where the vertical maps are these involutive automorphisms and the horizontal ones are the natural maps. The question of whether the Kazhdan-Lusztig basis can be 'lifted' in a natural and essentially unique way is therefore an intriguing one, that we leave open for now. 3.6. Combinatorics and Bell numbers. In type A n−1 , reflections have the form (i, j), 1 ≤ i < j ≤ n, and therefore a subset of R can be identified with a graph on n vertices. If J ⊂ R, then J is the graph of the transitive closure of the graph given by J, and the set of all graphs of this form is the set of disjoint unions of complete graphs on {1, . . . , n}. This set is in natural 1-1 correspondence with partitions of the set {1, . . . , n}, and therefore has for cardinality the n-th Bell number Bell n : 1, 1, 2, 5, 15, 52, 203, 877, . . .. Because of this, we will call in general the Bell number of type W the number of reflection subgroups of W , and we will call W -partitions the elementsJ, J ⊂ R. In type D n , it can be interpreted as the number of symmetric partitions of {−n, . . . , n} \ {0} such that none of the subsets is of the form {j, −j}, see sequence A086365 in Sloane's Online Encyclopaedia of Integer Sequences. Here symmetric means that, for every part X of the partition, its opposite −X is a part of the partition. Indeed, the reflections have the form s ij or s ′ ij , where s ij .(z 1 , . . . , z i , . . . , z j , . . . , z n ) = (z 1 , . . . , z j , . . . , z i , . . . 
, z n ) s ′ ij .(z 1 , . . . , z i , . . . , z j , . . . , z n ) = (z 1 , . . . , −z j , . . . , z i , . . . , z n ); then, to a stable subset R 0 of R we associate the partition of {−n, . . . , n} \ {0} made of the equivalence classes under the relation i ∼ j for ij > 0 if s ij ∈ R 0 , for ij < 0 if s ′ ij ∈ R 0 . Conversely, we associate to a partition P the collection of reflections made of the s ij for i, j > 0 in the same part of P, and of the s ′ ij for i, j > 0 when −i, j belong to the same part of P. These two maps provide a bijective correspondence. An exponential generating function for this sequence is Among the exceptional groups, we computed the number of reflection subgroups by using elementary methods in the computer system GAP3 together with its CHEVIE package, except for the largest ones E 7 and E 8 , for which this was not sufficient. Therefore, we used the classification of their reflection subgroups provided in [14] in this case : the total number is then the sum of the number of conjugacy classes provided in the third columns of tables 4 and 5 of [14]. The result can be found in table 1. In order to find the dimension of C p (W ), we need to know the number of parabolic subgroups. These are in 1-1 correspondence with the elements of the lattice of the corresponding hyperplane arrangements, and with this interpretation they are described in [35]. We call parabolic Bell number of type W and denote Bell p (W ) this number. Finally, when R is (one of) the classical root systems attached to W , we call Bell number of type R and denote Bell R (W ) the number of closed root subsystems. If W is of simply laced (ADE) type, then Bell R (W ) = Bell(W ). For exceptional groups, both numbers are also listed in table 1. For the infinite series B n and D n , the first values are listed in table 2. The series Bell p (D n ) and Bell p (B n ) are investigated and presented as analogues of Bell numbers in [42]. J. East communicated to us that he too generalized Bell numbers to series B, D and I 2 (m) (unpublished). In his approach, the 'right analogues' are Bell R (B n ), Bell(D n ) and Bell(I 2 (m)), respectively, which correspond to the sequences A002872, A086365 and A088580 in Sloane's encyclopaedia of integer sequences. To the best of our knowledge, the sequence Bell(B n ) has not yet been investigated. 3.7. Specialization at u = 1 and semisimplicity. The algebra C W (1) is obviously a semidirect product kW ⋉ A, where A is the subalgebra generated by the idempotents e J . Let L be a join semilattice. That is, we have a finite partially ordered set L for which there exists a least upper bound x ∨ y for every two x, y ∈ L. Let M be the semigroup with elements e λ , λ ∈ L and product law e λ e µ = e λ∨µ . Such a semigroup is sometimes called a band. If L is acted upon by some group G in an order-preserving way (that is x ≤ y ⇒ g.x ≤ g.y for all x, y ∈ L and g ∈ G) then M is acted upon by G, so that we can form the algebra kM ⋊ kG. Up to exchanging meet and joint, the algebra kM is the Möbius algebra of [40], definition 3.9.1 (this reference was communicated to us by V. Reiner). We will need the following proposition, which is in part a G-equivariant version of [40], theorem 3.9.2. Here k L is the algebra of k-valued functions on L, that is the direct product of a collection indexed by the elements of L of copies of the k-algebra k. Proposition 3.9. Let M be the band associated to a finite join semilattice L. For every commutative ring k, the semigroup algebra kM is isomorphic to k L . 
If L is acted upon by some group G as above, then kM ⋊ kG ≃ k L ⋊ kG. If G is finite and k is a field whose characteristic does not divide |G|, then the algebra kM ⋊ kG is semisimple. If kG λ is split semisimple for all λ ∈ L, where G λ < G is the stabilizer of λ, then so is kM ⋊ kG. Proof. To each λ ∈ L we associate ϕ λ : L → k defined by ϕ λ (µ) = 1 if λ ≤ µ and ϕ λ (µ) = 0 otherwise. We define a k-linear map c : M → k L by e λ → ϕ λ . We prove that c is an algebra homomorphism. We have that ϕ λ1 ϕ λ2 maps µ ∈ L to 1 iff λ 1 ≤ µ and λ 2 ≤ µ, and to 0 otherwise ; ϕ λ1∨λ2 maps µ ∈ L to 1 iff λ 1 ∨ λ 2 ≤ µ, and to 0 otherwise. These two conditions being equivalent, this proves c(e λ1 e λ2 ) = c(e λ1 )c(e λ2 ), hence c is a k-algebra homomorphism. We now prove that c is injective. We assume λ∈L a λ ϕ λ = 0 for a collection of a λ ∈ k, and we want to prove that all a λ 's are zero. If not, let λ 0 be a minimal element (w.r.t. ≤) among the elements of L such that a λ = 0. Then 0 = λ∈L a λ ϕ λ (λ 0 ) = a λ0 provides a contradiction. Therefore, c is injective. We now prove that c is surjective. Let f λ ∈ k L being defined by f λ (µ) = δ λ,µ (Kronecker symbol). The f λ 's obviously form a basis of k L and we need to prove that they belong to the image of c, that is to the submodule V spanned by the ϕ λ 's. Let λ 0 ∈ L. We prove that f λ0 belongs to V by induction with respect to ≤. If λ 0 is minimal in L, then ϕ λ0 = f λ0 and this holds true. Now assume f λ ∈ V for all λ < λ 0 . Let g = f λ0 − ϕ λ0 . We have g(µ) = 0 unless µ < λ 0 . Therefore g is a linear combination of the f µ 's for µ < λ 0 hence g ∈ V and this implies f λ0 ∈ ϕ λ0 + V ⊂ V . By induction we conclude that c is surjective, and therefore is an isomorphism. Now assume that L is acted upon by G. Then kM and k L are both natural kG-modules : if g ∈ G, then g.e λ = e g.λ and, if f : L → k, then g.f : λ → f (g −1 .λ). For these actions, c is an isomorphism of kG-modules. Indeed, g.ϕ λ (µ) = ϕ λ (g −1 .µ) is 1 is λ ≤ g −1 .µ and 0 otherwise, while ϕ g.λ (µ) is 1 if g.λ ≤ µ and 0 otherwise. Since the action of G is order-preserving, the two conditions are equivalent and this proves the claim. Therefore c induces an isomorphism kM ⋊ kG ≃ k L ⋊ kG. When G is a finite and k is a field, we have kM ⋊ kG ≃ k L ⋊ kG ≃ X∈E k X ⋊ kG where E is the set of orbits of the action of G on L. Each X is a finite, transitive G-set, and therefore k X ⋊ kG ≃ M at X (kG 0 ), where M at X (R) denotes the |X| × |X| matrix ring over the ring R, and G 0 < G is the stabilizer of an element of X (see e.g. [7], proposition 3.4). Therefore kM ⋊ kG is isomorphic to a direct sum of matrix algebras over group algebras of finite groups. It is thus semisimple if and only if all these group algebras are semisimple. This is the case as soon as the characteristic of k does not divide |G|. Similarly, it is split semisimple if all these group algebras are split semisimple, and this concludes the proof of the proposition. We use this proposition to prove the following. Theorem 3.10. Let W be a finite Coxeter group. The algebra C W (1) is isomorphic to k W ⋊ kW . Moreover, if k is a field then the following holds. (1) If the characteristic of k does not divide the order of |W |, then C W (1) is semisimple. If k has characteristic 0, then the algebra C W (u) is generically semisimple, and C W (u) ≃ k W ⋊ kW for generic u, up to a finite extension of k. 
(2) If moreover the group algebra kN W (W 0 ) of the normalizer of W 0 inside W is split semisimple for every reflection subgroup W 0 of W , then C W (1) is split semisimple. Proof. We apply the above proposition with L the semilattice made of all the reflection subgroups W, with ≤ denoting the inclusion of reflection subgroups, and the action of W is by conjugation. This proves one part of (1), and the remaining part is a consequence of Tits' deformation theorem (see e.g. [23], §7.4) and of the fact that C W (u) is a free module of finite rank over k[u], by theorem (2) is the consequence of the proposition above together with the fact that the stabilizers of the action of W on W are exactly the normalizers of reflection subgroups. Part In particular, for W = S n , this has the following consequence. Corollary 3.11. If W = S n and k has characteristic not dividing n!, then C W (1) is split semisimple over k. Proof. From the theorem above, we need to prove that, for every reflection subgroup W 0 of S n , its normalizer N 0 has a split semisimple group algebra over k. Recall that a reflection subgroup W 0 of S n naturally corresponds to a partition P of {1, . . . , n}. The normalizer of W 0 is easily seen to be the subgroup of S n stabilizing the partition, and is therefore a direct product of wreath products of the form S m ≀ S d = (S m ) d ⋊ S d for md ≤ n. The group algebras of these groups are split semisimple as soon as they are semisimple (see [19], cor. 4.4.9). By Maschke's theorem this holds true as soon as the characteristic of k does not divides n!, and this proves the claim. We do not know the class of groups, for which the above corollary holds (in characteristic 0). When W is not a Weyl group, the field Q should of course be replaced by the field of definition K = tr(w); w ∈ W . Also, we might want to generalize this statement either to C W (1) or, more cautiously, to C p W (1) or some C R W (1). The most naive (and vague) question on Coxeter groups related to this is therefore the following one. Question 3.12. For which finite Coxeter groups W and which class of reflection subgroups G of W can we expect that the group algebra KN W (G) of the normalizer is split semisimple ? One may wonder whether this is actually true for an arbitrary reflection subgroup and the class of all reflection subgroups. A simple and easy-to-visualize counterexample is given by the following construction. Consider the normalizer of a 2-Sylow subgroup S ≃ (Z/2Z) 3 of the symmetry group W = H 3 of the icosahedron. It is a semi-direct product S ⋊ C 3 , and S is a reflection subgroup -generated by the reflections around three orthogonal golden rectangles, see figure 1, and the element of order 3 is a rotation whose axis goes through the two opposite faces painted in blue. Therefore this normalizer has (1-dimensional) representations that can be realized only over Q(ζ 3 ), while the group algebra of W splits only over Q( √ 5). Relaxing the first assumption, the next natural question is whether this is actually true for a Weyl group (that is, K = Q) and again the class of all reflection subgroups. A counter-example can be constructed in type E 7 , where there is a 2-reflection subgroup W 0 isomorphic to Z 7 2 , whose normalizer N 0 has for quotient N 0 /W 0 ≃ PSL 2 (F 7 ) ≃ SL 3 (F 2 ). From the character table of SL 3 (F 2 ) (that can be found e.g. 
in the ATLAS [13]) one gets that it admits (for example, 3-dimensional) irreducible characters whose values generate Q( √ −7), and therefore the irreducible characters N 0 are not all rationally-valued. Interestingly enough, the reflection subgroups appearing as counterexamples here (for H 3 and E 7 ) both arise from the decomposition of −1 ∈ W as a product of orthogonal reflections, established in [38]. For the interested reader, one can check that, in type E 7 , we have N 0 = SL 3 (F 2 ) ⋉ F 7 2 , and the action of SL 3 (F 2 ) on F 7 2 is the permutation representation over F 2 associated to a transitive action of SL 3 (F 2 ) on 7 elements. Up to automorphism, there is only one transitive action of SL 3 (F 2 ), and this is its natural action on the seven non-zero elements of F 3 2 . I thank R. Stancu for discussions on this last topic. The next natural question is whether, for all reflection groups, and the class of all parabolic subgroups, the algebra KN W (G) is split semisimple, which would imply that C p W (1) is split semisimple for k = K. This might be attacked through Howlett's general description of the normalizers of parabolic subgroups (see [25]). Note that the constructions above in type H 3 and E 7 are not parabolic since they have the same rank as the whole group. The above discussion on the normalizers motivates to our eyes that the most natural remaining questions on the splitting fields for our algebras are the following ones. Question 3.13. Let W be a finite Coxeter group. (1) Is C p W (1) split semisimple for k = K ? At least when K = Q ? (2) Is there a natural minimal splitting field for C W (1) ? Can one characterize it in terms of W ? Braid image In this section we study the image of the (generalized) braid group B inside the algebra C W (u). We let B + denote the positive braid monoid (or Artin monoid) associated to W . Braid morphisms. Proposition 4.1. For every collection (λ s ) s∈S ∈ (k \ {0}) S such that s ∼ t ⇒ λ s = λ t , there exists a morphism B + → C W (u) defined by s → g s + λ s g s e s for s ∈ S. When k is a field, it can be extended to a morphism Φ λ : kB → C W (u) if and only if ∀s ∈ S λ s = −1. Proof. Let s, t ∈ S, and m st denote the order of st ∈ W . We have g s (1 + λ s e s )g t (1 + λ t e t )g s (1 + λ s e s ) . . . = g s (λ s e s ) ε1 g t (λ t e t ) ε2 g s (λ s e s ) ε3 . . . = (λ s e s ) ε1 (λ sts e sts ) ε2 g s g t g s (λ s e s ) ε3 . . . = ( (λ s e s ) ε1 (λ sts e sts ) ε2 (λ ststs e ststs ) ε3 . . . = g t (1 + λ t e t )g s (1 + λ s e s )g t (1 + λ t e t ) . . . mst and this proves the first part. In order to extend this morphism to B it is necessary and sufficient to have g s (1 + λ s e s ) invertible for all s ∈ S. Since g s is invertible, this means (1 + λ s e s ) invertible. 4.2. Description in type A 1 , and beyond for generic λ. If W has type A 1 , the algebra C W (u) can be described by two generators g, e and relations e 2 = e, ge = eg, g 2 = 1 + (u − 1)e(1 + g). We know that it is a free module with basis 1, e, g, eg. We let a 0 = (1 + g)(1 − e), a 1 = e(1 + g), , then a 0 , a 1 , a 2 , a 3 is again a basis over k. It is made of eigenvectors for g and e. The eigenvalues are a 0 a 1 a 2 a 3 e 0 1 0 It follows that g + λge has eigenvalues 1, u(1 + λ), −1, −1 − λ. The discriminant of its characteristic polynomial (X − 1 − λ)(X − u(1 + λ))(X + 1)(X + 1 + λ) is When this discriminant vanishes, and over a domain, g + λge satisfies a cubic relation, because 2 of the 4 eigenvalues are equal. 
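These eigenvalues are easy to check numerically: from the relations above one can write down the matrices of left multiplication by e and g on the basis 1, e, g, eg and compute the spectrum of g + λge for sample values of u and λ. In the sketch below the explicit matrices are derived by hand from the defining relations, and the numerical values of u and λ are arbitrary choices; it is an illustration, not a canonical construction.

```python
import numpy as np

u, lam = 3.0, 0.7   # arbitrary sample parameters

# Left multiplication matrices on the basis (1, e, g, eg) of the type A_1 algebra,
# obtained from e^2 = e, ge = eg, g^2 = 1 + (u-1)e(1+g):
#   e*1 = e,  e*e = e,  e*g = eg,                      e*eg = eg
#   g*1 = g,  g*e = eg, g*g = 1 + (u-1)e + (u-1)eg,    g*eg = u e + (u-1) eg
E = np.array([[0, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 1, 1]], dtype=float)
G = np.array([[0, 0, 1,     0],
              [0, 0, u - 1, u],
              [1, 0, 0,     0],
              [0, 1, u - 1, u - 1]], dtype=float)

I = np.eye(4)
# sanity checks of the defining relations in the left-regular representation
assert np.allclose(E @ E, E)
assert np.allclose(G @ E, E @ G)
assert np.allclose(G @ G, I + (u - 1) * E @ (I + G))

spec = np.sort(np.linalg.eigvals(G + lam * G @ E).real)
expected = np.sort([1.0, u * (1 + lam), -1.0, -(1 + lam)])
print(spec, expected)          # the two lists agree
assert np.allclose(spec, expected)
```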
When it is invertible, g + λge generates the whole algebra. As a consequence, we get for an arbitrary Coxeter group W the following. = 0 is known to be a free deformation of the group algebra of the group Γ 3 = Q 8 ⋊ (Z/3Z), where Q 8 is the quaternion group of order 8 (see [30]). Moreover it is known to be a symmetric algebra, with explicitely determined Schur elements. Specializing a, b, c to 1, −1, u we get from [33] that, when −1, u) is a semisimple algebra, isomorphic to kΓ 3 , possibly after some extension of scalars. We know that H 3 (1, −1, u) is a free module of rank |Γ 3 | = 24 and that C A2 (u) as rank 30. Over the field k = Q(u), the image of the natural map H 3 (1, −1, u) → C A2 (u) can be easily computed, starting from a basis of H 3 (1, −1, u). We get a vector space of dimension 20. Therefore, this image is the quotient of H 3 (1, −1, u) by one of its three 2-sided ideals corresponding to its simple modules of dimension 2. This quotient also appears in the study of the Links-Gould invariant, see [32]. This incites to look at skein relation of braid type satisfied by the Links-Gould invariant on 3 strands. Ishii has established ( [18] and also private communication, 2012) that, besides a cubic relation of the form (σ i − t 0 )(σ i − t 1 )(σ i + 1) = 0, the Links-Gould invariant vanishes on the following relation 1 + s 2 s 1 From explicit calculations inside H 3 (1, t 0 , t 1 ) one checks that this relation is non-trivial in this algebra. Therefore it is a generator of the simple ideal defining the Links-Gould quotient LG 3 in the notations of [32]. Another relation communicated by Ishii is the following one. One checks similarly that it is nontrivial in H 3 (1, t 0 , t 1 ). By explicit computations inside C A2 (u), one checks that both relations are valid there. For the second one one neeeds to specialize at {t 0 , t 1 } = {1, u}. This proves Proposition 4.3. The two relations above are satisfied inside C A k (u) (and therefore inside Y d,k+1 (u)), for all k ≥ 2. Moreover, if k is a field and ∆(u) = 0, then the image of kB 3 inside the algebra C A2 (u) is semisimple, has dimension 20, and can be presented by generators s 1 , s 2 , and the braid relations together with the cubic relation (s 1 − 1)(s 1 + 1)(s 1 − u) = 0 and one of the two relations above. The study of the algebra for a higher number of strands cannot be continued using the same methods as in [32], because the cubic quotient H 4 (1, −1, u), though still being finite dimensional, is conjecturally not semisimple. Indeed, the Schur elements of a conjectural symmetric trace for H 4 (a, b, c) -as defined and described e.g. in [10] -were computed and included in the development version of the CHEVIE package for GAP3 (see [34]), and some of them vanish when (a + b)(a + c)(b + c) = 0. We computed the dimension of the algebra generated by the braid generators inside C A k (u), k ∈ {3, 4} for a few rational values of u (including, for k = 4, u ∈ {17, 127, 217}). We obtained 217 for k = 3 and 3364 for k = 4. This sequence 3, 20, 217, 3364 of dimensions does not appear for now in Sloane's encyclopaedia of integer sequences, so we could not extrapolate a general formula from this. 4.4. Positive representation of the braid monoid for λ = −1. When λ = −1, the images of the Artin generators still satisfy the braid relations, but they are not invertible anymore. Therefore, they define a representation of the positive braid monoid, or Artin monoid, that we denote B + . We denote b s = g s − g s e s the action of s ∈ S. 
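Continuing the small numerical check of type A_1 above (same illustrative matrices E and G, same hand-derived construction), one can specialize to λ = −1 and observe directly that the operator b = g − ge is no longer invertible, while it still satisfies a simple cubic identity:

```python
import numpy as np

u = 3.0   # arbitrary sample parameter
E = np.array([[0, 0, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]], dtype=float)
G = np.array([[0, 0, 1, 0], [0, 0, u - 1, u], [1, 0, 0, 0], [0, 1, u - 1, u - 1]], dtype=float)

B = G - G @ E                      # the image of s for lambda = -1, i.e. b = g - g e
print(np.linalg.matrix_rank(B))    # 2 < 4: b is not invertible
assert np.allclose(B @ B @ B, B)   # b^3 = b
```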
We have b 3 s = b s , and a straightforward computation shows that, for all J ∈ P f (W) and w ∈ W , we have It is remarkable that this action does not depend on the parameters u s anymore. Moreover, when W is finite, we can convert it to a linear action with positive coefficients, as follows. Composing through the natural projection C W (u) → C In particular, if g ∈ B + is divisible by s ∈ S, then g.y [J],w = 0 for all s ∈ [J]. Therefore, one could hope that this representation g → b g of B + is initially injective in the sense given by Hée in his analysis of Krammer's faithfulness criterium (see [24]), meaning that b g determines the leftmost (or rightmost) simple factor of g. This would imply that the representation s → g s + λg s e s of B is faithful, for generic λ. However, this is not the case : in type A 2 , with generators s, t, a straightforward computation shows that b 2 s while s 2 t 3 s 2 is divisible by s and not by t (on both sides), while ststs = tstts = sttst is divisible by s and t on both sides. Finally, we remark that this representation with positive coefficients cannot be readily transposed to infinite Coxeter groups. Indeed, although the intersection of all parabolic subgroups containing a finitely generated reflection subgroup of W is a parabolic subgroup, and therefore the notion of parabolic closure remains well-defined, the relation rk([J ∪ {r}]) = rk([J]) + 1 whenever r ∈ [J] fails. The following easy example was communicated to me by T. Gobet. Let (W, S) be an affine Coxeter group of typeà 2 , and S = {s, t, u}. Let J = I = {s} and r = tut = utu. Then s, t is an infinite dihedral group, whose parabolic closure is W , because every proper parabolic subgroup of W is finite. Therefore rk[J ∪ {r}] = 2 + rk[J] in this case. Generalization to complex reflection groups Let W < GL(V ) be a finite complex reflection group, R its set of pseudo-reflections, W parab the collection of its parabolic subgroups, defined as the fixers of some linear subspace of V . We let A = {Ker (s − 1), s ∈ R} denote the associated hyperplane arrangement, X = V \ A the hyperplane complement and B = π 1 (X/W ) its braid group. Without loss of generality we may assume that A is essential, meaning A = {0}. We let L denote the lattice of the arrangement, formed by the intersections of reflecting hyperplanes. There is a 1-1 correspondence L → W given by L → W L where W L = {w ∈ W ; w |L = Id |L }. This bijection is an isomorphism of lattices, and it is equivariant under the natural actions of W . 5.1. Generalization of C p W (1), and a monodromy representation. For k an arbitrary unital commutative ring, we let kW parab = kL denote the commutative algebra spanned by a basis of idempotents e G , G ∈ W with relations e G1 e G2 = e [G1,G2] , where [A] denotes the parabolic closure of A, that is the fixer of the fixed point set of A ⊂ W . Equivalently, it is spanned by idempotents e L , L ∈ L with relations e L1 e L2 = e L1∨L2 , where e L = e WL . In particular, e s = e Ker(s−1) for all s ∈ R. This algebra is naturally acted upon by W , through w.e G = e wGw −1 , or equivalently w.e L = e w(L) . We define C p W (1) as the semidirect product W ⋉ kW parab ≃ W ⋉ kL. It is again acted upon by W through w.(w 1 .e G ) = (ww 1 w −1 )e wGw −1 . Applying proposition 3.7, we have the following analogue of theorem 3.8. Proposition 5.1. Let W be a finite complex reflection group, and k be a field. The algebra C p W (1) is isomorphic to k L ⋊ kW . Moreover, if char. 
k does not divide |W |, then C p W (1) is semisimple. It is split semisimple as soon as the group algebra kN W (W 0 ) is split semisimple for all W 0 ∈ W parab , where N W (W 0 ) denotes the normalizer of W 0 inside W . We let T denote the holonomy Lie algebra of the hyperplane complement V \ A. Recall from [26] that it is presented by generators t H , H ∈ A and relations [t H0 , t E ] = 0 for all H 0 ∈ A and E a codimension 2 subspace contained in H 0 inside the hyperplane lattice (such a subspace is called a flat), where t E = H⊃E t H . It is acted upon by W through w.t H = t w(H) . For H ∈ A we let W H = {w ∈ W ; w |H = Id H } ∈ W parab . It is a cyclic group of order m H ∈ Z ≥2 . It contains a unique generator s H with eigenvalue exp(2iπ/m H ), that we call the distinguished reflection associated to H ∈ A. We remark that, if H 2 = w(H 1 ) for some w ∈ W , then e H2 = e H1 and ws H1 w −1 = s H2 . The following simple fact will be crucial for us. We state it as a lemma. We notice that ss i The Hecke algebra H W (a) of W is defined as the quotient of kB by the relations σ mH = mH −1 k=0 a H,k σ k for any braided reflection σ associated to s H ∈ R with H ∈ A. We remark that the algebra kL admits an augmentation map kL → k defined by e L → 1, which is split through 1 → e W = e {0} . From this the following is immediate. Proposition 5.6. The maps kB → H W (a) and kL → k together induce an algebra morphism B ⋉ kL → H W (a) ⊗ k k = H W (a). It factorizes through a surjective algebra morphism p : Proof. For instance by invertibility of the Vandermonde determinant, one can find complex scalars λ H,i such that 0≤i<mH λ (i) H (ζ H ) ri ) = τ H,r for 0 ≤ r < m H , with ζ H = exp(2iπ/m H ). We consider the monodromy morphism R : kB → C p W (1) constructed above. The image of a braided reflection σ associated so s H has eigenvalues 1, ζ and ζ r exp(hτ H,r ) = v H,r . For instance by using Chen's iterated integrals, we notice that, where A 0 is the subalgebra of C p W (1) generated by the se H , for s ∈ R and H = Ker (s − 1). Lemma 5.2 implies that A 0 commutes with all e L , L ∈ L. Therefore, we have It remains to prove that the defining relations of C p W (a) are satisfied. Let H ∈ A, s = s H and σ a braided reflection associated to them. For short, let S = R(σ) and S 0 = s exp(hϕ(t H )). We have S = P s exp(hϕ(t H ))P −1 for some P ∈ A 0 [[h]]. Since ϕ(t H ) commutes with A 0 , we get S m = P exp(mhϕ(t H ))P −1 = 1 + P (exp(mhϕ(t H )) − 1)P −1 . We have exp(mhτ H,r ) = v mH H,r and v mH H,r − i a H,i v i H,r = i (v H,r − v H,i ) = 0. Now, the compared spectrum of the elements in play is as follows and this proves the claim. We remark that proposition 4.1 admits no direct generalization to the complex reflection groups setting, namely there is not in general a 1-parameter family of morphisms B → C p W (a) of a similar form. Indeed, let us consider for W the group generated by 2-reflections called G 12 in the Shephard-Todd classification. Its braid group has the presentation s, t, u | stus = tust = ustu and W = B/ s 2 , t 2 , u 2 . Letting e x ∈ C p W (a) denote the idempotent associated to the hyperplane Ker (x−1), for x ∈ W a reflection, one can check that there can be a morphism B → C p W (a) satisfying y → y + λe y y, for y ∈ {s, t, u} only if the 4 reflecting hyperplanes associated to the reflections {s, sts, stuts, stusuts} are the same as the ones associated to the reflections {t, tut, tusut, tustsut} (equivalently, that these two sets of 2-reflections are equal). 
One readily checks that this does not hold. 5. 3. An extended freeness conjecture. For a W -orbit of hyperplanes c, the order m H of W H for H ∈ c depends only on c. Therefore, we can denote it m c , and define a generic ring R W = Z[(a c,i , a −1 c,0 ] for c ∈ A/W and 0 ≤ i < m c . The generic algebra C p W is defined over the ring k = Z[(a c,i , a −1 c,0 ] as in definition 5.4 by letting a H,i = a c,i if H ∈ c. Proposition 5.8. If the algebra C p W is spanned by |W |.|L| elements as a R W -module, then it is a free R W -module of rank |W |.|L|. Proof. The proof follows exactly the same lines as in [6] (proof of theorem 4.24), see also [31] proposition 2.4, the 'monodromic' ingredient being given by proposition 5.7 above. It is left to the reader. We consider C p W (a) as a kB-module. As a kB-module, it is generated by the e L , L ∈ L. Let Lemma 5.9. If each E L is spanned as a k-module by |W | elements of the form b.e L , b ∈ B, then C p W (a) is spanned by |W |.|L| elements, and therefore it is a free k-module of rank |W |.|L|. Proof. Assume that, for each L, we have elements b L,w , w ∈ W such that E L is spanned by the b L,w .e L . We shall prove that C p W (a) is spanned by the b L,w .e L for L ∈ L, w ∈ W . Since C p W (a) is generated as a kB-module by the e L , L ∈ L, it is spanned as a k-module by the be L , L ∈ L. Therefore, it is sufficent to prove that such a be L0 is a linear combination of the b L,w .e L , L ∈ L. We prove this by induction on L 0 with respect to the well-ordering provided by the lattice L. If L 0 = {0}, then b.e L = b.e W ∈ E L = E and we have the conclusion by assumption. If not, we know that there exists scalars α L0,w , w ∈ W such that x = b.e L0 − w∈W α L0,w b L0,w .e L0 ∈ E ′ L0 . By the induction assumption we can write x as a linear combination of the b L,w e L for L L 0 , and therefore b.e L0 as a linear combination of the b L,w e L for L ⊂ L 0 , and this proves the claim. We notice that the action of kB on E {0} = E {0} factorizes through H W (a), and therefore E {0} is spanned by |W | elements if and only if the BMR freeness conjecture is true for W . We also notice that the action of kB on E V factorizes through the regular representation of kW , hence E V is clearly spanned by |W | elements. In this way, the presumed fact that each E L is spanned by |W | elements appears as an intermediate between the trivial fact that kW has this property and the BMR freeness conjecture that H W is spanned by |W | elements. For a given L = {0}, and if true, it should be easier to prove than the freeness conjecture for H W , since, at each stage, the relation g m s = . . . to be used can be either the complicated (Hecke) one or the trivial one (g m s = 1). However, it does not seem to readily follow from it, and therefore we propose it as a (a priori stronger) conjecture. Conjecture 5.10. (extended freeness conjecture) The algebra C p W (a) is a free k-module of rank |W |.|L|. Moreover, each module E L , L ∈ L, is spanned by |W | elements of the form b L,w .e L , w ∈ W , with b L,w ∈ B mapping to w ∈ W under the natural map B ։ W . If C p W is a R W -module of rank |W |.|L|, then it is a free deformation of the algebra C W (1), which is semisimple for k = Q by proposition 5.1. Therefore, Tits' deformation theorem (see e.g. [23], §7.4) and proposition 5.1 imply the following, where K W denotes a field containing R W . Proposition 5.11. If the extended conjecture is true, then C p W ⊗ RW K W is semisimple. 
If moreover K W is algebraically closed, then C p W ⊗ RW K W ≃ C p W (1) ⊗ K W ≃ K L W ⋊ K W W . If W has rank 2 and the BMR freeness conjecture is true for W , the proof is reduced to the consideration of the E H for H ∈ A. Since gbe L g −1 = gbg −1 e π(g)(L) for all g ∈ B, we moreover need to consider only one hyperplane per W -orbit. 5.4. The case of G 4 . The smallest non-trivial example of an irreducible non-real complex reflection group outside the infinite series of monomial groups is the group Q 8 ⋊ Z 3 denoted G 4 in Shephard-Todd notation. It is also the group for which the original BMR freeness conjecture has had, so far, the more topological applications (see e.g. [32,33]). In this case B = s 1 , s 2 | s 1 s 2 s 1 = s 2 s 1 s 2 is the Artin group of type A 2 (a.k.a. the braid group on 3 strands) and W is the quotient of B by the relations s 3 1 = s 3 2 = 1. A proof of the original BMR freeness conjecture for this case can be found for instance in [30]. We let B = {1, s ε 1 e 1 by (4), which belongs to Be 1 by (1). This proves conjecture 5.10 for W = G 4 . 5.5. An extended Ariki-Koike algebra. Let W = G(d, 1, n) be the group of n × n monomial matrices with entries in µ d (C) ∪ {0}. In this case H W is known as the Ariki-Koike algebra, and B is the Artin group of type B n /C n , with generators t = a 1 , a 2 , . . . , a n . The images of a i and a j inside W satisfy an Artin (braid) relation of length 4 if {i, j} = {1, 2}, 2 if |i − j| ≥ 2, and 3 otherwise. If we abuse notations by letting e b , b ∈ B be equal to e Ker (β−1) for β ∈ W the image of b, we have inside C p W the relations t d = 1 + (q − 1)e t P (t) for some polynomial P of degree at most d − 1, and a 2 i = 1 + (q − 1)(a i + 1)e ai for i ≥ 2. We adapt the arguments of [2] to prove conjecture 5. 10. First of all, we let t 1 = t, t i = a i t i−1 a i . There is a classical injective morphism B → B n+1 , where B n+1 is the usual braid group on n + 1 strands, given by t → σ 2 1 , a i → σ i+1 for i ≥ 2, where σ 1 , . . . , σ n+1 denote the classical Artin generators. Under this map, each t i is mapped to δ i+1 = σ i+1 σ i . . . σ 2 σ 2 1 σ 2 . . . σ i+1 , and δ i = z i+1 /z i where z i = (σ 1 . . . σ i−1 ) i is the canonical generator of Z(B i ). From this, we have the following relations inside B, and therefore inside C p W : (1) For all i, j with j ∈ {i, i + 1} we have a j t i = t i a j (2) For all i, j we have t i t j = t j t i (3) For all i, we have a i t i−1 t i = t i−1 t i a i . Let E denote the (commutative) subalgebra of C p W generated by the e H , H ∈ A. Note that Eb ⊂ bE for all b ∈ B. The above equalities moreover imply that the t i , i ≥ 1 generate a commutative subalgebra of C p W . We prove the following lemma. Lemma 5.13. For all k ≥ 1, Proof. We prove (1) by induction on k ≥ 1. We first assume k = 1. We have a i t i = a 2 i t i−1 a i = t i−1 a i + (q − 1)(a i + 1)e ai t i−1 a i = t i−1 a i + (q − 1)e ai t i−1 a i + (q − 1)a i e ai t i−1 a i = t i−1 a i + (q − 1)t i−1 a i e a −1 E and this proves (1). We now prove (2) by induction on k ≥ 1. If k = 1, then E and this proves (2). Since a 2 , . . . , a n satisfy the braid relations in type A n−1 , by Iwahori-Matsumoto theorem we know that, for each g ∈ S n there is a well-defined a g ∈ B such that a g = a i1 . . . a ir for every reduced decomposition g = s i1 . . . s ir with s im = (m, m − 1). We note that, for each i ≥ 2, a i a g ∈ h∈Sn a h E, as a consequence of the corresponding inequality inside C Sn (u). 
From this we prove that
$$C^p_W \;=\; \sum_{g \in S_n} \ \sum_{0 \le k_1, \dots, k_n \le d} t_1^{k_1} \cdots t_n^{k_n} \, a_g \, E.$$
Indeed, the right-hand side contains 1 and is clearly stable by left multiplication under
• a_1 = t = t_1, by the order relation t^d = 1 + (q − 1)P(t)e_t;
• a_2, . . . , a_n, by lemma 5.13 and the fact that a_i a_g E ⊂ Σ_{h ∈ S_n} a_h E for all i ≥ 2.
Since E is spanned by |W^parab| elements, and |W| = d^n n!, this proves that the assumption of proposition 5.8 is satisfied, and this proves conjecture 5.10 for W = G(d, 1, n).
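To see these elements concretely, one can realize W = G(d, 1, n) by monomial matrices, with t acting as the diagonal matrix with ζ_d in position 1 and a_i (for i ≥ 2) as the permutation matrix swapping coordinates i − 1 and i; in this model the images of the t_i are the diagonal matrices with ζ_d in position i, which visibly commute, and the length-4 braid relation between t and a_2 is immediate. The sketch below only checks these statements at the level of W (matrices), not in the braid group or in C^p_W; the names and the chosen values of d and n are illustrative.

```python
import numpy as np

d, n = 3, 4
zeta = np.exp(2j * np.pi / d)

def t1():
    """Image of t = t_1 in W = G(d,1,n): diagonal matrix with zeta_d in position 1."""
    m = np.eye(n, dtype=complex)
    m[0, 0] = zeta
    return m

def a(i):
    """Image of a_i (2 <= i <= n): permutation matrix swapping coordinates i-1 and i."""
    m = np.eye(n, dtype=complex)
    m[[i - 2, i - 1]] = m[[i - 1, i - 2]]
    return m

# t_i = a_i t_{i-1} a_i, computed in W
t = [t1()]
for i in range(2, n + 1):
    t.append(a(i) @ t[-1] @ a(i))

# each t_i is diagonal with zeta_d in position i, and the t_i commute pairwise
for i, ti in enumerate(t):
    expected = np.eye(n, dtype=complex)
    expected[i, i] = zeta
    assert np.allclose(ti, expected)
for ti in t:
    for tj in t:
        assert np.allclose(ti @ tj, tj @ ti)

# the images of t and a_2 satisfy the length-4 braid relation: t a_2 t a_2 = a_2 t a_2 t
assert np.allclose(t[0] @ a(2) @ t[0] @ a(2), a(2) @ t[0] @ a(2) @ t[0])
print("checked t_i = a_i t_{i-1} a_i and the length-4 braid relation in G(%d,1,%d)" % (d, n))
```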
2017-01-13T11:06:28.000Z
2016-01-13T00:00:00.000
{ "year": 2016, "sha1": "21af9486057c6e30bf7aa29b0e99cc96c3b70eac", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1601.03191", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "21af9486057c6e30bf7aa29b0e99cc96c3b70eac", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
46932028
pes2o/s2orc
v3-fos-license
The Cumulative Impact of Harm Reduction on the Swiss HIV Epidemic: Cohort Study, Mathematical Model, and Phylogenetic Analysis Abstract Background Human immunodeficiency virus (HIV) transmission among injecting drug users (IDUs) is increasing in the United States due to the recent opioid epidemic and is the leading mode of transmission in Eastern Europe. Methods To evaluate the overall impact of HIV harm reduction, we combined (1) data from the Swiss HIV Cohort Study and public sources with (2) a mathematical model expressed as a system of ordinary differential equations. The model reconstructs the national epidemic from the first case in 1980 until 2015. Phylogenetic cluster analysis of HIV-1 pol sequences was used to quantify the epidemic spillover from IDUs to the general population. Results Overall, harm reduction prevented 15903 (range, 15359–16448) HIV infections among IDUs until the end of 2015, 5446 acquired immune deficiency syndrome (AIDS) deaths (range, 5142–5752), and a peak HIV prevalence of 50.7%. Introduction of harm reduction 2 years earlier could have halved the epidemic, preventing 3161 (range, 822–5499) HIV infections and 1468 (range, 609–2326) AIDS deaths. Suddenly discontinuing all harm reduction in 2005 would have resulted in outbreak re-emergence with 1351 (range, 779–1925) additional HIV cases. Without harm reduction, the estimated additional number of heterosexuals infected by HIV-positive IDUs is estimated to have been 2540 (range, 2453–2627), which is equivalent to the total national reported incidence among heterosexuals in the period of 2007 to 2015. Conclusions Our results suggest that a paramount, population-level impact occurred because of the harm reduction package, beyond factors that can be explained by a reduction in risk behavior and a decrease in the number of drug users over time. Human immunodeficiency virus (HIV) transmission via injecting drug use remains one of the leading modes of transmission in Eastern Europe and many Asian countries (eg, China, Indonesia, Iran), and it is recently re-emerging in the United States as a result of the growing heroin epidemic, which is driven by overprescription of opioid analgesics [1][2][3]. Despite a large body of evidence on the effectiveness of harm reduction measures to halt the spread of HIV among people who inject drugs, there is still a large heterogeneity in the estimates [4]. These measures also remain politically controversial and are far from being universally implemented and accepted [5,6]. As a result, the harm reduction coverage is still extremely low across the world and lags behind World Health Organization (WHO) targets [7]. From the early 1980s, Switzerland experienced one of the heaviest burdens of drug addiction (mainly heroin and cocaine) in Europe, which manifested in the emergence of large open drug scenes such as the "Platzspitz" ("Needle-Park") in Zürich. This resulted in an outbreak of HIV, hepatitis B virus (HBV), and hepatitis C virus (HCV) infections in this risk group, and Switzerland had the highest acquired immune deficiency syndrome (AIDS) incidence rate in Europe in 1988 [8]. After a growing public outcry, and considering the failure of repressive measures as the main response tool, a new progressive drug policy was gradually implemented that was based on "four pillars": prevention, therapy, harm reduction, and law enforcement [9]. 
The main harm reduction measures included the following: (1) extensive needle-exchange programs, ie, on-site distribution at open drug scenes, pharmacies, and syringe vending machines; (2) supervised drug consumption rooms; (3) low-threshold methadone programs; and (4) since 1994, a supervised, injectable medicinal heroin program. In parallel, a wide-reaching "STOP AIDS" campaign was launched with a tailored message for drug users, which emphasized the HIV risk in needle sharing [10]. Furthermore, HIV-infected current and former injecting drug users (IDUs) had broad access to antiretroviral drug treatment programs [11,12]. From the public health perspective and particularly regarding HIV transmission, those efforts proved to be a phenomenal success. Despite a relatively low cessation rate of drug use and despite the fact that the prevalence of heroin addiction remained relatively stable [13], the transmission of HIV among IDUs in Switzerland dropped from a peak of 937 new cases in 1989 to a low of 2% (9 of 519) of all new infections in 2014, hence almost eliminating HIV transmission among IDUs. To date, a quantitative evaluation of the cumulative impact of the implemented harm reduction measures has not been performed. In this study, we combine a mathematical model with the data from the Swiss HIV Cohort Study (SHCS), the SHCS drug-resistance sequence database, national epidemiological data, and data from previous works to perform the following: (1) estimate the counterfactual HIV incidence and prevalence among IDUs in absence, or with delayed introduction, of the harm reduction measures; (2) examine the effect of discontinuing harm reduction measures when the HIV epidemic among IDUs appears as under control; and (3) estimate the cumulative effect of the implemented harm reduction measures on the spillover of the epidemic to the general population in Switzerland. Ethics We obtained ethical approval from the SHCS and written informed consent from all participants. Swiss HIV Cohort Study and the Drug Resistance Database The SHCS is an ongoing prospective cohort of HIV-positive individuals. The study prospectively enrolled patients since 1988, and some data were retrospectively ascertained until 1981. During the biannual outpatient visits, comprehensive clinical and behavioral data are collected [14]. In addition, for more than 60% of the participants, partial pol sequences are available. The representativeness of the SHCS was estimated to be high, with good coverage of marginalized and hard-to-reach populations, and is particularly good for subtype B, which is the predominant subtype in Switzerland [15]. Mathematical Model We constructed a compartmental, deterministic transmission model represented as a nonlinear system of 32 ordinary differential equations (Figure 1). The model reconstructs the epidemic from the first introduced HIV case into the IDUs population in 1980 and is numerically solved until 2015. The modeled population corresponds to all heroin users in Switzerland. 
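As a purely schematic illustration of how such a compartmental system can be written down and integrated numerically (it is not the 32-equation model itself), the sketch below implements a drastically reduced toy version with susceptible and infected active injectors, each split into an uncovered and a harm-reduction-covered stratum. None of the compartments, rates, or numbers below are the fitted Swiss estimates; they are placeholders chosen only to make the code run.

```python
import numpy as np
from scipy.integrate import odeint

def toy_model(y, t, beta, rel_risk, recruit, mu):
    """Schematic 4-compartment sketch (NOT the 32-equation model): susceptible (S) and
    HIV-infected (I) active injectors, each with an uncovered and a harm-reduction-covered
    (_hr) stratum. 'rel_risk' scales both infectiousness and susceptibility while covered;
    'recruit' is the coverage recruitment rate from 1988 onward; 'mu' is an exit rate.
    All parameter values are placeholders, not fitted estimates."""
    S, I, S_hr, I_hr = y
    N = S + I + S_hr + I_hr
    foi = beta * (I + rel_risk * I_hr) / N           # hazard felt by an uncovered susceptible
    rec = recruit if t >= 1988 else 0.0              # harm reduction package introduced in 1988
    dS    = -foi * S               - rec * S   - mu * S
    dI    =  foi * S               - rec * I   - mu * I
    dS_hr = -rel_risk * foi * S_hr + rec * S   - mu * S_hr
    dI_hr =  rel_risk * foi * S_hr + rec * I   - mu * I_hr
    return [dS, dI, dS_hr, dI_hr]

years = np.linspace(1980, 2015, 351)
y0 = [30000.0, 1.0, 0.0, 0.0]                        # one index case introduced in 1980
sol = odeint(toy_model, y0, years, args=(0.6, 0.1, 0.15, 0.05))
prev = (sol[:, 1] + sol[:, 3]) / sol.sum(axis=1)
print("toy HIV prevalence in 1990 and 2015: %.2f / %.2f" % (prev[100], prev[-1]))
```

The same pattern (a right-hand-side function passed to an ODE solver, with a switch at the year harm reduction starts) scales to the full stratification by diagnosis stage, treatment, and coverage.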
The model is divided into 3 meta-strata that represent a typical progression of an addiction course: (1) "non-injectors" represent people who smoke or snort heroin; (2) "active injectors" represent populations at risk of infection with HIV by sharing injection paraphernalia; and (3) "past-injectors" represent people covered by harm reduction that are still addicted to opioids, but do not contribute to the infectious pool anymore, since they switched to snorting/smoking or are in a methadone or supervised heroin program and permanently ceased injecting in a setting that facilitates transmission. The active injectors and the past-injectors are stratified into HIV susceptible and infected. All infected IDUs start in the undiagnosed compartment and can be diagnosed either in recent, chronic, or AIDS stage, with different rates. Since 1996, diagnosed individuals can transit to a combined antiretroviral treatment (cART) treated stage, with rates that depend on the disease stage and are increasing with calendar year to reflect transition to immediate treatment. Those rates were estimated from the SHCS based on CD4 counts as a proxy for disease stage (Supplementary Table 3). Except for past-injectors, which are covered by harm reduction by definition, each compartment is mirrored by a parallel harm reduction-covered strata to which individuals transit with an average rate that represents the harm reduction recruitment rate. Because the different harm reduction layers were overlapping in time (see Supplementary Figure 1), we do not model the separate effect of each measure (methadone, needle exchange, supervised injectable heroin, etc), but we use a harm reduction "package" [7] that was introduced in 1988, which means being covered by any of the harm reduction measures versus being missed by all of them. The exception to this pooled consideration of harm reduction is the restricted methadone program, because this was the main available measure before the introduction of the package, which allowed us to disentangle its effect. We assumed that IDUs covered by harm reduction had lower HIV transmission coefficient. This transmission rate, the harm reduction package recruitment rate, and other model parameters were determined by fitting the model using negative log-likelihood-distributed error to the annual number of new HIV cases and AIDS deaths in IDUs that were reported to the Swiss Federal Office of Public Health. See the Supplementary Data for a detailed description of the model, parametrization, and sensitivity analysis. Phylogenetic Analysis A large maximum likelihood phylogenetic tree with 19 604 Swiss sequences and 90 994 non-Swiss background sequences was constructed as previously described [16]. Introduction events into the general (heterosexual) population that originated from IDUs were detected by extracting all clusters that comprised only Swiss sequences and had at least 1 IDU and 1 heterosexual individual. For each IDU, the tree nodes were traversed back until the cluster either contained another IDU individual or a risk group that is other than an IDU or heterosexual, then the largest previous cluster was returned. This way, our analysis estimated not only the spillover population but also the further transmission of HIV within the heterosexual population caused by that spillover population. See the Supplementary Data for a detailed description of cluster analysis and the spillover calculation. 
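A schematic version of this rootward tree walk, not the implementation used for the analysis, can be written with a minimal rooted-tree structure: from each IDU tip, move to successive parent clades and stop as soon as the clade contains a second IDU or any risk group other than IDU or heterosexual, then report the last clade visited before stopping. The Swiss-only restriction and all other filters are omitted here; the class and function names are illustrative.

```python
class Clade:
    """Minimal rooted-tree node; leaves carry a risk-group label ('IDU', 'HET', 'MSM', ...)."""
    def __init__(self, children=None, risk_group=None):
        self.children = children or []
        self.risk_group = risk_group
        self.parent = None
        for child in self.children:
            child.parent = self

    def tips(self):
        if not self.children:
            return [self]
        return [tip for child in self.children for tip in child.tips()]

def spillover_cluster(idu_tip):
    """Walk rootward from an IDU tip; return the largest clade that still contains only this
    IDU plus heterosexual tips (the putative cross-risk-group introduction), or None."""
    current, last_valid = idu_tip, None
    while current.parent is not None:
        current = current.parent
        groups = [tip.risk_group for tip in current.tips()]
        if groups.count('IDU') > 1 or any(g not in ('IDU', 'HET') for g in groups):
            break
        last_valid = current
    return last_valid

if __name__ == "__main__":
    # toy tree: ((IDU, HET), (HET, MSM)); building the root sets all parent pointers
    idu = Clade(risk_group='IDU')
    inner = Clade([idu, Clade(risk_group='HET')])
    root = Clade([inner, Clade([Clade(risk_group='HET'), Clade(risk_group='MSM')])])
    cluster = spillover_cluster(idu)
    hets = sum(tip.risk_group == 'HET' for tip in cluster.tips())
    print("heterosexual tips linked to this introduction event:", hets)   # 1
```

Counting the heterosexual tips inside each returned clade, rather than only the immediate neighbours, is what lets the analysis capture onward transmission within the heterosexual population caused by each spillover event.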
Any harm reduction To harm reduction Analysis Tools Statistical analysis was performed with R (version 3.2.3). The system of equations was solved using the package deSolve (version 1.14); the package "ape" (version 4.1) was used for phylogenetic analysis. Injecting Drug Users in the Swiss HIV Cohort Study Between 1983 and 2016, 4806 IDUs were enrolled in the SHCS, 3311 of those were most likely infected with HIV through sharing infected paraphernalia, and the remaining 1495 might have been infected via sharing or via sexual route. The number of newly enrolled IDUs decreased with time from 553 newly registered in 1990, to 17 in 2016 (P for trend <.0001; Figure 2), hence accurately reflecting the drop of HIV incidence among IDUs in Switzerland [17]. The period prevalence of HBV and HCV coinfections was high, with 78.3% (2312 of 2954; 1862 not tested) and 94.6% (2728 of 2883; 1923 not tested), respectively, for the entire period, and 58.2% (39 of 67; 2 not tested) and 73.1% (49 of 67; 2 not tested) in the last 7 years. This high prevalence of HBV and HCV alludes to a high fraction of nonassortative needle sharing, as expected in an open drug scene and assumed in our model. Model Performance The proposed model exhibits a qualitatively good fit, both to the annual number of newly diagnosed HIV cases among IDUs in Switzerland and to the annual number of reported AIDS deaths (Figure 3a and b, respectively). The model also catches, for the most part, the assumed dynamic of the population of problematic heroin users in Switzerland, with a peak during early 1990s and a subsequent gradual decline (Figure 3c). Finally, the model predicts HIV prevalence among IDUs in Switzerland, which falls in line with published estimates of approximately 10% between 1993 to 2000 [18]. Human immunodeficiency virus prevalence and the number of heroin users were deliberately not used for model fitting, to serve as an additional quality check for the prudence of our model; nevertheless, the dynamics of those compartments is captured well by the model. The Combined Effect of Harm Reduction Measures, No Harm Reduction, and Sudden Discontinuation First, we examined the extreme-yet relevant to other countries-worst-case scenario of no harm reduction at all since 1980, which required transferring the individuals on restricted methadone-that was available since late 1970s-to model compartments not covered by any harm reduction, from the start of the simulation. This Only Restricted Methadone and No New Recruitment Since 2000 Next, we examined the effect of not introducing the extensive harm reduction package but continuing with high-threshold methadone only, with the same recruitment rate as before 1988 (~8.5% per year; Supplementary Data). This would have resulted in 11 462 additional HIV cases (range, 10 399-12 526) (Figure 4i and Figure 5) and 3190 (range, 2793-3588) additional AIDS deaths (Figure 4j). It is notable that restricted methadone is still superior to the scenario with no harm reduction at all, with 4441 prevented cases and 2256 fewer deaths. Finally, we explored a less radical discontinuation scenario, in which individuals that are covered by harm reduction remain in the covered compartment (with the same dropout rate); however, since 2000, there is no new recruitment to the harm reduction covered compartments. This scenario emulates a harm reduction budget cut plot. 
This would have resulted in a slow re-emergence with 1616 additional HIV cases (range, 938-2295) (Figure 4i) with no substantial increase in additional AIDS deaths (range, 114-235) (Figure 4j). The Effect of Combined Antiretroviral Treatment Although, chronologically, the epidemic reached its peak and began to decline before cART introduction in 1996, we still observe a moderate protective effect of cART (the harm reduction-related parameters were not changed in this scenario), with 771 (range, 401-1142) new HIV cases prevented by cART alone until the end of 2015, and-as expected-an ample effect on AIDS deaths, with 1771 (range, 991-2552) prevented deaths ( Figure 5 and Supplementary Figure 3). Spillover to the General Population The phylogeny contained 4235 sequences from 2399 SHCS IDUs, with 94.3% (2262 of 2399) harboring subtype B. Cluster analysis showed 499 heterosexuals clustered with IDU in Swissonly clusters, which were linked to 358 putative cross-riskgroup introduction events (Supplementary Figure 4) in which the phylogenetically closest IDU was male in 60.3% (216 of 358) and female in 39.7% (142 of 358). In absence of any harm reduction (scenario a), the estimated additional number of heterosexuals whose infection originated from HIV-positive IDUs is estimated to have been 2540 (range, 2453-2627) new infections, which is comparable to the total national HIV incidence among heterosexuals in the entire period from 2007 to 2015 (n = 2476, Federal Office of Public Health [19][20][21]). DISCUSSION According to UNAIDS, in Eastern Europe and Central Asia, 51% of all newly diagnosed HIV is attributed to people who inject drugs [22]. However, only 7% to 15% of all IDUs in Eastern Europe have access to needle and syringe programs, whereas for opioid substitution treatment the coverage is approximately 1% [3], and it remains illegal in Russia. Likewise, the Western-Europe, North-America, and Australasia region combined have not yet reached the WHO middle-coverage target of 20% for needle and syringe programs [7]. Our model estimates that a very high prevalence of HIV (~50%) among IDUs would have occurred in the absence of harm reduction. More importantly, our model takes into account both the overall decrease in heroin consumption as well as the decreasing number of injectors. Thus, the high prevalence in the absence of harm reduction is predicted to have occurred despite those general trends of drug use. This counterfactual estimate is also in line with historical seroprevalence data from socioeconomically comparable areas that had little to no harm reduction at that time. Frankfurt, Germany, had a large open drug scene, with HIV prevalence of 73.7% in 1994 [23], in Spain the prevalence was 63% in 1996 [24], and in northern Italy the prevalence was 49% in 1989 [25]. Some areas in the United States also exhibited high HIV prevalence, with 61% in New York [26] and 60% in New Haven, Connecticut [27], during the early 1990s. In Eastern Europe, and especially in Russia, which exercises a repressive approach toward IDUs and repulsion of the harm reduction concept on the political level, a 37% prevalence was estimated in 2003 [3]. In Estonia, the rate was as high as 72% [3]. Considering the low incidence of HIV among IDUs in Switzerland in the recent years, there is a growing debate on whether the funds invested in harm reduction can be safely allocated elsewhere. In 2016, the canton of Zürich decided to cut 4.5 million Swiss Francs from the drug-addiction treatment programs until 2019 [28]. 
Our study shows that suddenly stopping harm reduction measures, even several years after the epidemic appears as under control, can lead to a new outbreak. This result is supported by a recent experience from Greece, a country with a historically low HIV incidence among IDUs (1.5% to 4.5% of all new infections during 2000-2010) [29]. In 2011, due to the fiscal crisis and severe austerity, the harm reduction measures were underperforming [30]. Until the first 8 months of 2013, 1000 new HIV cases among IDUs have already been diagnosed [31]. After harm reduction-in form of needle and syringe exchange and opioid-substitution-was scaled up again, HIV incidence was reduced 5-fold within 1 year [32]. Our estimates show a moderate impact of cART on curbing HIV transmission among IDU in Switzerland. This can be attributed to 2 factors: (1) cART was introduced in 1996 after the epidemic was already contained by the harm reduction measures, which started in part in 1988; and (2) the effect of cART is partly undermined by lower adherence among IDUs [33], which was also reflected in our model. However, as expected, cART prevented a large number of AIDS deaths among IDUs. Our model has several benefits and can be adjusted for the following factors: (1) the decrease in the number of injectors with time; (2) the possible reduction in risk behavior even in people who are not reached by any harm reduction due to overall awareness of HIV [34]; and (3) the decrease in needle sharing by IDUs who are aware of their HIV-seropositive status and are concerned of infecting others [35]. Because the extent of the relevance of those developments to the Swiss settings is uncertain, we speculate that we might have underestimated the effectiveness of the combined harm reduction measures and that our estimates lay on the conservative side. This is further supported by the fact that, due to scarcity of data, we could not account for cocaine-only injectors; however, injectors of cocaine and heroin ("Speedball") were accounted for in our model. In addition, our model has the advantage of being applicable to the current opioid analgesics-driven HIV epidemic, because it accounts for the transition from a noninjecting to injecting drug administration mode. Our model is limited because it does not differentiate between the different measures implemented, except for restricted methadone. However, in this work, we were a priori interested in cumulative estimates. Our model also only accounts for sexual transmission within but not between the 3 meta-strata. Nonetheless, the contribution of sexual transmission is expected to be of secondary importance due to an 8-fold higher per-act transmission probability for needle sharing [36]. Finally, as it is often in modeling studies, the uncertainty ranges of our predictions might be underestimated. Indeed, not all countries affected by an HIV epidemic among IDUs possess the resources that were available in Switzerland. However, the unit costs of harm reduction interventions are relatively low and are estimated to be highly cost effective [7] and, in light of the results presented here, might even be cost saving. In addition, we demonstrated that the benefits of harm reduction extend beyond the population of IDUs, with thousands of averted spillover heterosexual infections. Similar studies are needed for the HCV epidemic, which affects this population even more severely than HIV. 
CONCLUSIONS In summary, our results, based on the Swiss experience, highlight the pivotal role of harm reduction in successfully curbing HIV transmission among IDUs and in preventing grave repercussions for the general population.
2018-06-21T14:10:37.613Z
2018-05-01T00:00:00.000
{ "year": 2018, "sha1": "b9461829c0cbf23dcb82ab836b9ac527f9c14b72", "oa_license": "CCBYNCND", "oa_url": "https://academic.oup.com/ofid/article-pdf/5/5/ofy078/33591253/ofy078.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9461829c0cbf23dcb82ab836b9ac527f9c14b72", "s2fieldsofstudy": [ "Medicine", "Mathematics" ], "extfieldsofstudy": [ "Medicine" ] }
90697519
pes2o/s2orc
v3-fos-license
Cytotoxicity of N-nitrosoguanidines in a breast cancer cell model Nitric oxide (NO) is a biologically important molecule with diverse functions in the human body. In recent years, there has been a growing interest in the potential use of NO donors in cancer therapy, since several studies have shown that the regulation of NO levels influences various pro or antitumor processes. The anticancer effects of NO donors have already been described in different in vitro and in vivo studies. The aim of this preliminary study was to evaluate the antitumor potential of two compounds of the N-nitrosoguanidines family, L1 (1-nitroso-1methyl-3-benzoylguanidine) and L2 (1-nitroso-1-methyl-3-tolylsulfonylguanidine). The compounds did not show significant in vitro cytotoxicity in the human breast cancer cell model MDA-MB-231. However, further studies will be needed to duly elucidate their antitumor potential. The use of other cancer cell models and the combination of L1 and L2 with radiotherapy or with chemotherapy agents may constitute interesting approaches for future studies. Introduction NO is a ubiquitous signaling, regulatory and effector molecule with various biological effects on the human body at the level of the vascular, neuronal and immune systems (1)(2)(3).NO is synthesized in vivo by three different NO synthases (NOS), including nNOS and eNOS, which are constitutively expressed in neuronal and endothelial cells, respectively, and iNOS which is transcriptionally regulated and induced by inflammatory cytokines, oxidative stress, hypoxia and some endotoxins (3).The wide range of NO-mediated functions occurs mostly through a cGMP-dependent pathway, leading to vasodilation, neurotransmission, inhibition of platelet aggregation, and relaxation of smooth muscle.A second cGMP-independent pathway is the NO reaction with molecular oxygen, reactive oxygen species (ROS), thiols, and transition metals.NO can also directly modify proteins without the intervention of enzymes by nitration or nitrosylation (2)(3).S-nitrosylation of thiol groups of cysteine residues is a reversible modification that is involved in several cellular signaling processes, which regulate the function of several intracellular proteins (2)(3).NO donors, such as nitroglycerin, have been approved for clinical use to treat cardiovascular diseases.Recently, the potential usefulness of such compounds in cancer therapy has been suggested (4).Regarding cancer, NO is known to have a dichotomic effect, with pro-or anti-cancer effects, depending on its concentration, microenvironment and cell type (2)(3).At low concentrations of NO (< 100 nM), there is an association with increased angiogenic and proliferative processes, as well as with resistance to apoptosis.On the other hand, high concentrations of NO (> 500 nM) are associated with increased cytotoxicity and apoptosis (1)(2)(3).The NO anti-tumor mechanisms include the induction of p53 protein (5), proteosomal degradation of anti-apoptotic molecules, cytochrome C release with increased mitochondrial permeability, Smac / DIABLO complex release, and direct S-nitrosylation of NF-kB, SNAIL and YY1 transcription factors (3,6).The role of NO in cancer therapies is related with hypoxia and hyponitroxia.Hypoxia is one of the standard features of solid tumors, leading to hyponitroxia by inhibition of NO synthesis.Hyponitroxia increases hypoxia by the lack of NO-modulated blood flow, in a mutual cycle that can promote tumor progression (3).Therefore, both O 2 and NO levels could be therapeutic targets 
for cancer. Various types of NO donors have been synthesized. Among those, S-nitrosothiols (general formula RSNO) have been studied for their antineoplastic properties (1). This class of RSNO-like compounds is generally unstable, but some compounds such as S-nitroso-N-acetylpenicillamine (SNAP) and S-nitrosoglutathione (GSNO) are more chemically stable and have documented antitumor activity in various cancer cell lines, such as the HeLa line (cervical cancer), EMT-6 (murine mammary cancer) and colon cancer cells, with induction of apoptosis and cell cycle arrest, even in the presence of hypoxia (1). The aim of this study was to evaluate the anti-tumor potential of two compounds with proven ability to donate NO to thiols, including cysteine and glutathione, by transnitrosylation reactions (7): compound L1 (1-nitroso-1-methyl-3-benzoylguanidine) and compound L2 (1-nitroso-1-methyl-3-tolylsulfonylguanidine), shown in Figure 1. Cell culture The culture medium included penicillin and 0.1 mg/mL streptomycin (8). Cell cultures were maintained at 37 °C under a humidified atmosphere containing 5% CO2 in air. The Crystal violet staining assay Cell viability was evaluated by the crystal violet (CV) staining assay. Approximately 6000 cells in 190 μL of culture medium per well were plated in 96-well plates and incubated for 24 hours. Cells were then exposed to compounds L1 and L2 (10, 25 and 50 μM) for 48 hours. The CV assay was carried out according to a previously described protocol (9). DMSO 5% (v/v) and the recognized anticancer drug doxorubicin (Dox, 10 μM) were used as positive controls. Two independent experiments were performed, each comprising four replicate cultures. Results The obtained results show that compounds L1 and L2 did not induce significant cytotoxicity in the breast cancer cell line MDA-MB-231, in the range of concentrations tested and for the 48-hour incubation period used. As can be seen in Figure 2A, only minor reductions in cell viability were found in cultures treated with L1 or L2. Conversely, the positive controls exhibited significant cytotoxicity for the same incubation period. DMSO 5% (v/v) decreased cell viability to 24% (Figure 2B). The anticancer drug Dox, at 10 μM, decreased cell viability to 29% (Figure 2B).
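The viability values reported above follow from normalising the crystal violet signal of treated cultures to that of untreated controls. A minimal sketch of that calculation is given below; the absorbance readings, condition labels, and replicate numbers are hypothetical, not the study's raw data.

import numpy as np

# Hypothetical crystal violet absorbance readings, four replicate wells each.
raw = {
    "untreated": [0.82, 0.79, 0.85, 0.80],
    "L1_50uM":   [0.78, 0.76, 0.81, 0.77],
    "L2_50uM":   [0.80, 0.74, 0.79, 0.78],
    "DMSO_5pct": [0.20, 0.18, 0.21, 0.19],
    "Dox_10uM":  [0.24, 0.22, 0.25, 0.23],
}

control_mean = np.mean(raw["untreated"])
for condition, values in raw.items():
    # Viability expressed as a percentage of the untreated control.
    viability = 100.0 * np.mean(values) / control_mean
    sem = 100.0 * np.std(values, ddof=1) / (np.sqrt(len(values)) * control_mean)
    print(f"{condition:>10}: {viability:5.1f}% of control (+/- {sem:.1f})")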
Discussion The potential anticancer effect of compounds L1 and L2 was evaluated in MDA-MB-231 cells, a human cell line representative of aggressive breast cancer. The results obtained herein suggest that L1 and L2 are devoid of considerable anticancer activity against these cells. However, for a more complete screening of the anticancer properties of L1 and L2, their cytotoxicity should also be studied in additional cell models, representative of other cancer types, since NO levels vary between different cancers. A more extensive study of incubation times with the compounds should also be performed to allow the detection of possible mechanisms of delayed cell death. Further in vitro studies evaluating the type of cell death and the cell cycle may also provide information on the mechanisms involved in the biological effects of this type of compound. Several pre-clinical and clinical studies report a synergistic effect between NO-donor compounds and conventional cancer therapies. NO donors serve as sensitizing agents against radio- or chemoresistant cells, lowering their hypoxia levels by regulating HIF-1α expression and consequently increasing tumor oxygenation, which allows greater efficacy of radiotherapy and chemotherapy through ROS generation with cytotoxic effects (3). A study by Matthews et al. (10) demonstrates this premise, verifying the relationship between tumor hypoxia, endogenous NO levels and the chemosensitivity of human and murine cell lines. In that study two cell lines were used, the MDA-MB-231 human breast cancer line and the B16F10 murine melanoma line, exposed to different levels of O2 and incubated with NO inhibitors in the presence of doxorubicin or 5-fluorouracil; under conditions of hypoxia and low NO levels, resistance to the chemotherapeutic drugs increased, translating into greater survival of the tumor cell lines. However, with the addition of NO donors, this effect was reversed even at small doses (10). Other in vitro studies on prostate cancer cell lines and a three-dimensional model of breast cancer relate the effect of NO donors to the level of hypoxia and chemosensitivity (11)(12)(13). An alternative mechanism of NO donors in combination with antitumor agents was reported by Chegaev et al. (14), who observed that NO donors prevented the activation of proteins associated with drug resistance, such as P-glycoprotein, in the doxorubicin-resistant HT29 colorectal cancer cell line (14). Considering the possible effect of NO donors as therapy sensitizers, the combination of L1 and L2 with chemotherapy or radiotherapy should also be considered in future studies. NO donors may also influence cell invasiveness, as can be seen in a study by Postovit et al. (15). Using the MDA-MB-231 cell line, these authors found that low doses of NO donors inhibited the hypoxia-mediated increase in cell invasion (15). The same research group performed an in vivo study with a murine model of melanoma, demonstrating that NO donors reverse the increase in proliferation of metastatic nodules, even under conditions of hypoxia (16). Studying the impact of L1 and L2 on cell migration and invasion may also shed light on the potential usefulness of these compounds in cancer therapy. Despite the well-documented potential usefulness of NO donors in cancer therapy, L1 and L2 did not exhibit relevant cytotoxic effects under the conditions tested. Taking into account the relation between the effects of NO donors and hypoxia, in vitro studies under hypoxic conditions should also be carried out. In addition, other mechanisms beyond cell viability should be explored to duly assess the therapeutic potential of these compounds.
The author declares that there is no personal or financial relationship that can be understood as presenting a potential conflict of interest.
2019-01-02T01:47:05.238Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "11375410f46797697d4fec8c12a67c8324281fec", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.19277/bbr.14.2.159", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "11375410f46797697d4fec8c12a67c8324281fec", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
20539579
pes2o/s2orc
v3-fos-license
Interpreting linguistically quantified propositions We discuss the idea of a linguistic quantifier and fuzzy set representations of these objects. We describe two formalisms for evaluating the truth of linguistically quantified propositions such as Most winter days are cold. the first approach is based upon a probabilistic interpretation and the second is based upon a logical interpretation, and uses a generalization of the “and” and “or” operations via OWA operators. We suggest an application of these quantified statements for the representation of the quotient operator in fuzzy relational data bases. © 1994 John Wiley & Sons, Inc. I. INTRODUCTION Classical logic is restricted to the use of the two quantifiers, there exists andfor all. These quantifiers are closely related respectively to the or and and connectives. Human discourse is much richer and more diverse in its use of quantifiers. Any attempt at modeling human reasoning must come to grips with other linguistic quantifiers such as most, some, at least hav. Zadeh' denoted these quantifiers as linguistic quantifiers and suggested that the semantics of the various different linguistic quantifiers can be captured by using fuzzy sets to define a quantifier. In this work we are concerned with the issue of evaluating the truth of propositions containing linguistic quantifiers. A fundamental duality exists in the interpreting of these propositions. On one hand we can view a quantifier as a kind of imprecise probability. On the other hand we view a quantifier a kind of connective lying between the and and or.* These two views manifest themselves in two formulations for the evaluation of propositions such as Most X ' s are B , The first approach which is probabilistic in spirit was suggested by Zadeh.'s3 The second approach introduced by Yager2,4qs and based upon use of his Ordered Weighted Averaging (OWA) operators2 is logical in spirit. databases by enabling us to define a fuzzy quotient operator. We also show how quantified propositions can play a role in relational LINGUISTIC QUANTIFIERS Quantifiers can be used to represent the amount of items satisfying a given predicate. Classic logic allows for the use of only two quantifiers, for ull and there exisrs (not none). These two quantifiers are respectively called the universal, V, and existential, 3, quantifiers. In an attempt to bridge the gap between formal systems and natural discourse, with its large array of quantifiers, and in turn provide a more flexible knowledge representation tool, Zadeh6 introduced the concept of linguistic quantifiers. In addition to two classic logic quantifiers, for all and there exists, linguistic quantifiers are exemplified by terms such as most, a.fiw, ubout 5, at least half. Central to Zadeh's formulation of a theory of linguistic quantifiers is his representation of these quantifiers as fuzzy subsets. In developing his theory Zadeh distinguished between two types of linguistic quantifiers, absolute and proportional. Absolute quantifiers are used to represent amounts that are absolute in nature such as about 5 or more than 20. These absolute linguistic quantifiers are closely related to the concept of the count or number of elements. Zadeh suggested that these absolute quantifiers can be represented as fuzzy subsets of the nonnegative real numbers, R+. 
In this approach an absolute quantifies can be represented by a fuzzy subset Q, such that for any r E R + the membership grade of r in Q , Q ( r ) , indicates the degree to which the amount r is compatible with the quantifier represented by Q. Proportional quantifiers, such as at least half or most, can be represented by fuzzy subsets of the unit interval, I . For any r E I , Q<r> indicates the degree to which the proportion r is compatible with the meaning of the quantifier it is representing. A close relationship exists between these proportional quantifiers and probability values, especially linguistic probability values. It is generally assumed that the linguistic quantifiers encountered, absolute and proportional, are normal, there exists at least one element in its base set having membership grade one. Furthermore, we shall call a quantifier regular if in addition to being normal there exists at least one value having membership grade zero. While we have distinguished between the formal representation of absolute and proportional quantifiers in actual usage many of same natural language terms are used to characterize both absolute and proportional quantifiers. Thus the quantifier most can be represented as a proportional quantifier or given the cardinality of the elements considered we can represent it as an absolute LINGUISTICALLY QUANTIFIED PROPOSITIONS 543 quantifier. Similarly the linguistic entityfew can also be used to indicate either an absolute or proportional linguistic quantity. Yager7v8 has investigated a number of issues related to linguistic quantifiers. In Ref. 9 he provides a survey of some of their applications. Dubois and Prade]" have also contributed to the development of these objects. In Figure I we illustrate a number of linguistic quantifiers. YAGER Functionally, linguistic quantifiers are usually of one of three types, increasing, decreasing, and unimodal. An increasing type quantifier is characterized by the relationship These quantifiers are characterized by values such as at least a, all, most. A decreasing type quantifier is characterized by the relationship The quantifiers characterize terms such as afew, utmost a. Unimodal quantifiers have the property that These are useful for representing terms like ahout q . In many cases, dual relationships exists between increasing and decreasing quantifiers, especially for proportional quantifiers. Assume Q, and Q2 are quantifiers defined on the unit interval. If Q , is an increasing quantifier then Q2 defined such that Qdr) = Qi(1 -r> is called its dual and is a decreasing type quantifier. From the previous figures we can see that most and f e w can be reviewed as duals. A concept that plays a significant role in the manipulation of these quantifiers is the idea of cardinality of a fuzzy subset. Assume A is a fuzzy subset of X , the sigma count of A denoted Xount(A) is defined as the sum of the membership grades in A , This concept is closely related to the power of a fuzzy subset introduced by DeLuca and Termini." [XI x2 x3 X ' J A second concept introduced by Zadeh is the relative count (or cardinality). If A and B are two fuzzy subsets then the relative sigma count of A to B is defined XCount(A n B ) If we assume that the intersection is defined by the min operator then LINGUISTICALLY QUANTIFIED PROPOSITIONS 545 The Xount(A IB) is essentially the proportion of elements in B that are in A . It is closely related to the conditional probability. 
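As a concrete illustration of the definitions above, the following minimal sketch encodes a regular, nondecreasing proportional quantifier "most" (its exact breakpoints are illustrative, not prescribed by the text) together with the sigma count and the relative sigma count using min for the intersection; all membership grades are invented.

def most(r: float) -> float:
    """Illustrative proportional quantifier 'most': regular and nondecreasing."""
    if r <= 0.3:
        return 0.0
    if r >= 0.8:
        return 1.0
    return (r - 0.3) / 0.5

def sigma_count(A: dict) -> float:
    """Sigma count of a fuzzy subset: the sum of its membership grades."""
    return sum(A.values())

def relative_sigma_count(A: dict, B: dict) -> float:
    """Sigma count of A relative to B, using min for the intersection."""
    inter = {x: min(A.get(x, 0.0), mu) for x, mu in B.items()}
    return sigma_count(inter) / sigma_count(B)

cold = {"d1": 0.9, "d2": 0.4, "d3": 0.0, "d4": 1.0}    # A, e.g. "cold days"
winter = {"d1": 1.0, "d2": 0.7, "d3": 0.2, "d4": 1.0}  # B, e.g. "winter days"

print([round(most(r), 2) for r in (0.2, 0.5, 0.9)])    # [0.0, 0.4, 1.0]
print(sigma_count(cold))                               # sum of grades, 2.3
print(round(relative_sigma_count(cold, winter), 3))    # 2.3 / 2.9, about 0.793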
Some important identities can be established regarding these sigma counts are: As we have discussed, one significant use of these natural language quantifiers is to indicate the quantity of elements satisfying a predicate. Statements of the form most dogs are easily trained few educated people are arrested usually rain causes delays are prototypes of the use of these quantifiers. Formally we can consider two classes of sentences of the above type. Assume Q is a linguistic quantifier, X is a set of elements, and A and B are fuzzy predicates (subsets) defined on X . One class of sentences, denoted type I , are of the form QX's are A (I) A second class of sentences, denoted type 11, are of the form QB's are A . (11). Besides the use of these quantifiers in natural language statements of the type described above there exists a category of applications of the above formalism developed by Yager2,5%9 which he calls "quantifier guided aggregation" (QGA). QGA has many applications in the development of intelligent decision systems. In the following we shall describe two prototypical applications of QGA, one in information fusion and the other from multicriteria decision making. Let X be a collection of different pieces of evidence and let Y be a set of conjectures or hypotheses upon which this evidence may bear. Let y E Y be one particular conjecture. Furthermore let A, be a fuzzy subset over the set of evidence, X , such that for each x E X , A J x ) indicates the degree to which evidence x is compatible with conjecture y. If Q is used to indicate a linguistic quantifier, such as most, then the proposition "most evidence is compatible with y" can be represented by the formal structure QX's are A,. YAGER In this information fusion environment the truth of the above proposition can provide a measure of how good is the conjecture y in the light of the evidence set X . Another area of application of this quantifier guided aggregation technique is in the domain of multicriteria decision making (MCDM). In MCDM the set X is used to indicate the collection of criteria and goals desired by the decision maker. We let D be a collection of alternative potential solutions and d E D be any particular solution. In this case we use A, to indicate a fuzzy subset over the set of criteria X , such that for each criteria x E X , A d ( x ) indicates the degree to which alternative d satisfies the criteria x. Again using Q as some linguistic quantifier such as most then the proposition In this case A, is a fuzzy subset of X , the collection of criteria, indicating the predicate satisfied by d. The truth of the above proposition can be used to indicate the degree to which d is a good solution. An extension of the usefulness of this approach in MCDM problems can be obtained if we let B be a fuzzy subset of the criteria, X , such that B ( x ) indicates the importance of criteria x. Thus we can represent the proposition In this case the truth of this type (11) sentence would indicate the degree to which d satisfies Q of the important criteria. The use of this kind of quantifier guided aggregation can be easily extended to pattern recognition, medical diagnosis, learning, and information retrieval. We note Kacprzyk has used these ideas in among other ways consensus formulation and planning.'2, '3 It should be pointed out that in the applications discussed above the quantifiers used are generally of the increasing type. This is due to the fact that the more criteria (evidence) satisfied the better. 
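As a small illustration of quantifier-guided aggregation in the multicriteria setting just described, the sketch below scores a hypothetical alternative d by the proportion of the important criteria it satisfies, computed with the relative sigma count of the previous section; applying a quantifier to this proportion anticipates the formal evaluation procedures developed in the sections that follow. The criteria names and all membership values are invented.

def sigma_count(A):
    return sum(A.values())

def proportion_satisfied(A_d, B):
    """Relative sigma count of A_d with respect to B (min intersection):
    the proportion of the important criteria that alternative d satisfies."""
    inter = {x: min(A_d.get(x, 0.0), w) for x, w in B.items()}
    return sigma_count(inter) / sigma_count(B)

most = lambda r: 0.0 if r <= 0.3 else 1.0 if r >= 0.8 else (r - 0.3) / 0.5

criteria_importance = {"price": 1.0, "range": 0.8, "comfort": 0.4}   # B
satisfaction_d = {"price": 0.9, "range": 0.5, "comfort": 1.0}        # A_d

p = proportion_satisfied(satisfaction_d, criteria_importance)
print(round(p, 3), round(most(p), 3))   # proportion, and its grade in 'most'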
In using the formalisms Q X s are A and QB's are A a problem of considerable interest becomes that of trying to determine the truth of such a proposition for a given Q, A , X , and perhaps B. As we indicated in quantified guided aggregated applications this truth becomes a measure of validity of a conjecture while in the MCDM application it can be used to determine the goodness of a proposed alternative. In Ref. 1 Zadeh suggested an approach to establishing the truth of the propositions I and 11. There exists a minor difference depending on whether the quantifier is absolute or relative. In the following we shall initially assume that Q is an absolute quantifier. For the proposition QX's are A's the truth value 3 is obtained as follows (1) Calculate In the case of the proposition QB's are A ' s the truth value is obtained as follows We note that here we are essentially interpreting QB's are A's as QX's are ( A andB)'s. If Q is aproportional quantifier then Zadeh suggests for the proposition QX's are A we obtain 3 as (1) Calculate (2) 3 = Q(r) Finally for the case where we have Q relative and a type I1 proposition, Let A be the fuzzy subset On the other hand they can be viewed as a kind of linguistic probability, this view is especially useful for the proportional quantifiers. A thoughtful review of the previous section indicates that in providing an operational semantics for the type I and type I1 statements Zadeh relied very heavily on the probabilistic interpretation of these quantifiers. While in many cases this works well it leads to some difficulties. At the heart of most of the problems with this probabilistic-like interpretation is the simple additive accumulation of membership in terms of the sigma count. This additive approach leads to a loss of uniqueness of the individual elements. A particularly glaring breakdown of this approach is the following. Assume X consists of 10 elements. Consider the quantifier, Q , at least one where Consider the proposition At least one X ' s are A . Assume A is a fuzzy subset such that A ( x ) = .1 for all x. In this case while no element in X satisfies A , we get r = ZA(xi) = I and hence Q(1) = 1. Thus the statement is evaluated as true when indeed it is far from true. Not unaware of these problems with the approach he suggested, Zadeh introduced the idea of FE count" as an alternative means for evaluating the propositions I and 11. The FE count while dealing with the above problem brought along other difficulties. In particular the FE count requires the introduction of fuzzy counts which greatly complicate the calculations. In Ref. 2 Yager introduced the concept of a weighted ordered averaging (OWA) operator. As we shall subsequently see these operators can be used to provide a semantics for the interpretation of these quantified statements which has very much the flavor of extension of the binary quantifiers rather then the probabilistic flavor of Zadeh's approach and avoids the difficulty just pointed out. We note that the approach presented is very compatible with the FE count approach but avoids the introduction of fuzzy values. A number of the issues involved here are very closely tied in with the work being done on measures for conditional objects.I6 The OWA operators introduced by Yager provide a family of aggregation operators which have the and operator at one extreme and the or operator at the other extreme. DEFINITION Another important characteristic of these operators are their idempotency , a, a , . . . , a ) = a . In Ref. 
2 Yager shows the close relationship between the OWA operators and the linguistic quantifiers. In particular he suggested a methodology for associating each regular monotone nondecreasing linguistic quantifier with an OWA operator. Assume Q is a monotone proportional regular linguistic quantifier. Let F be an OWA operator of dimension n. We say that F is determined from Q if the weights associated with F are obtained in the following manner, for all i w, = Q(i/n) -Q(il / n ) . Figure 2 illustrates the process of obtaining the weights of the associated OWA operator from a given quantifier. Since Q is monotone it naturally follows that w, 2 0 for all i. Furthermore since Q is assumed regular, Q(0) = 0 and Q(1) = I then it can easily shown that Z,w, = 1. YAGER Given a proposition Q X s are A where Q is a linguistic quantifier we can use the OWA operators to calculate the truth of this proposition. We first associate with Q an OWA operator F whose weighting vector W is of dimension, n, where n is the cardinality of X. We then use the suggested method to calculate these weights, We then use these weights to make an OWA aggregation of the A(xi), 3* = F (A(x,), . . . ,A(xn)). If we used the original method suggested by Zadeh we would get 3 = Q(CiA(xi)). The following example illustrates the use of this approach. Example We see that the result here is close to that obtained in the earlier example. Earlier we provided an example which clearly showed the shortcomings of the approach based upon the sigma count. In the following example we apply the OWA approach to this example and clearly see its performance as an extension of the binary quantifier. Example. Consider the proposition there exists at least one X that is A . Let X = {x,, x2, . . . , x,} and let A be the fuzzy subset of X such that Thus in this case with n = 10 we get Thus the truth of the above proposition is . 1 which is more intuitively appealing. While in most cases the two approaches give different results the following theorem shows the equivalence of the two approaches in the case when the subset A is crisp. THEOREM. Let Q be a monotone regular linguistic quantifier. Let F be an OWA operator of dimension n whose weights are determined from Q. Let However since wj = Q ( j / n ) -Q ( jl/n) then but Q(0) = 0 hence While the two approaches are equivalent in the special case when the arguments are binary, 0 or 1 they give different results in the case where the arguments are drawn from the unit interval. There exists at least one quantifier for which both approaches always give the same result. 11. Let F be the OWA operator whose weights are determined from Q. Then for any argument (a,, . . . , a,, Thus for this quantifier, which lies midway between the and and the or, the two approaches are equivalent. However, as the following theorem show this is not the case for the extreme quantifiers. THEOREM. Assume Q is the quant8er defined by Q(r> = r for r E [U, THEOREM. Assume Q is the universal quantifier for all. Let F be the 0 W A operator determined from Q. Then for any argument (a,, . . . a,) 5 " where 3* is the evaluation using the OWA method and 3 is the evaluation under the probability-like method. Proof. We recall in this case that Q is defined such that: (a,, . . . , a,) 3 2 3*. 3. THEOREM. Assume Q is the existential quant$er, there exists. Let F be 555 Proof. As we have indicated the OWA operator provides for aggregations lying between the and and the or operator. In Ref. 
2 Yager introduced a measure of "orness" associated with an OWA operator. Assume F is an OWA aggregation with weights wi then It can be shown that Thus given a set of weights associated with the OWA operator F the above measures how much like an or operator this aggregation is performing. In Ref. 2 Yager introduced a second measure associated with an OWA operator. This measure called the dispersion of F is denoted disp( F ) and defined disp(F) = -&wj In wj. This function measures how distributed the allowed sum of one is between the weights. This measure is an entropy like measure. Disp(F) can be seen to measure how effectively the aggregation process is using the information available to it as arguments. For example, when w 1 = 1, disp(F) = 0, the aggregation is based solely on one piece of data and thus very little of the information in the argument is used. On the other hand when w i = l/n for all i then disp(F) is maximum and the aggregation process is effectively using all the data provided in the argument. In Refs. 17-19 O'Hagan suggested using these two measures to obtain the weights for an OWA aggregation. The method introduced by O'Hagan is as follows. We first select a measure of "orness", a, which we desire to guide the aggregation. We then find the weights w,, w2, . . . , w, which satisfy the following mathematical programming problem: Maximize -&wi ln(w,) (Disp) 556 Subject to: 11. Using the above programming approach we can see that every a gives us a set of weights corresponding to those that we would get from a quantifier. In particular an a can be seen as being equivalent to a quantifier denoted Q,. We can now consider statements of the form QJ's are A . In the above we mean to indicate that the aggregation is guided by the weights obtained from solving the above MP problem with a. We note that a essentially determines the degree of "orness" of the antecedents. IV. TYPE I1 STATEMENTS As we indicated linguistic quantifiers play a significant role in the evaluation of the truth of propositions of the form QX's are A (I) and QD's are A . (11) In the following we shall assume A and D are fuzzy subsets of X and Q is a monotone proportional quantifier. In this section we shall suggest a methodology for evaluating the truth of type I1 statements using an OWA aggregation. We recall that in solving the type I problem we obtained the weights from Q as where n is the cardinality of X . Then we obtain 3 , the truth of proposition I as where bj is thejth largest of the A(xi)'s. In the following we shall suggest an approach in the spirit of the OWA technology for evaluating statements of type 11, QD's are A . The procedure is going to be in format the same as for type I. Obtain the weights for Q to get an OWA operator then use these weights for the aggregation. However, the procedure used to get the weights from the quantifier Q is going to be different. One key distinction between this approach and that used in handling type I 557 propositions is that in determining the weights from Q we don't divide the unit interval into n equal size parts. Instead we divide the interval into n not necessarily equal parts. The size of each of the parts is determined by D. Thus in this case the weights used in the OWA operator are affected by D. In the following we describe the technique for getting the weights from Q given a D . We shall let dj = D(xi) and let ej be thejth smallest of the di's, thus the ej's are ordered such that el 5 e2 5 e3, . . . , 5 e,. In addition x i d j = x j e j . 
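The construction of the type II weights begun above is completed in the text that follows. Before that, the sketch below consolidates the type I machinery described so far: deriving OWA weights from a regular nondecreasing quantifier via w_i = Q(i/n) - Q((i-1)/n), aggregating the ordered arguments, and computing the orness and dispersion measures. The quantifier breakpoints and the argument values are illustrative.

import math

def owa_weights(Q, n: int):
    """Weights of the OWA operator determined from a regular nondecreasing
    proportional quantifier Q: w_i = Q(i/n) - Q((i-1)/n)."""
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def owa(weights, args):
    """OWA aggregation: weighted sum of the arguments sorted in descending order."""
    b = sorted(args, reverse=True)
    return sum(w * x for w, x in zip(weights, b))

def orness(weights):
    n = len(weights)
    return sum((n - i) * w for i, w in enumerate(weights, start=1)) / (n - 1)

def dispersion(weights):
    return -sum(w * math.log(w) for w in weights if w > 0)

Q_most = lambda r: 0.0 if r <= 0.3 else 1.0 if r >= 0.8 else (r - 0.3) / 0.5
a = [0.7, 1.0, 0.3, 0.9, 0.5]           # degrees to which each x_i satisfies A
w = owa_weights(Q_most, len(a))
print([round(x, 2) for x in w])         # [0.0, 0.2, 0.4, 0.4, 0.0]
print(round(owa(w, a), 2))              # truth of "most X's are A", 0.66 here
print(round(orness(w), 2), round(dispersion(w), 2))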
We shall denote the sum as d . We define the weights of the associated OWA operator as where Sj = -c ej. Alternatively we can express Sj in a recursive form as s, = 0 Some properties of this procedure should be noted: (1) Since Sj 2 Si-l for all i then with Q monotone wi 2 0. (3) Since So = 0 then with Q(1) = 1 we are assured that xiwi = 1. We shall now look at the evaluation of these weights in some special cases. Example. Assume D = X, D(xi) = 1 for each i. In this case d = n and ej = 1 thus Si = l/n. From this we see that w j = Q(: ) -p ( q ) . Thus we see that our new procedure for obtaining the weights is an extension of the old procedure since this regime reduces to the old method when D = X . Example. Assume Q is the linguistic quantifier defined by Q(r) = r (see Fig. 3). and therefore We now look at the special case when Q is the universal quantifier, "for all". We recall that in this case We have proved that for any non-null subset D, the weights associated with the quantifier for all are always zero except for w, which takes the value I. The situation with respect to the existential quantifier is a little more subtle. Let Q be the fuzzy subset indicating the existential quantifier. The existential quantifier is defined by the following - The question is where to put the value a. A natural place to put a is let a = 1 -where n is the cardinality of X . However this choice has some problems n especially when D has many elements but only a few have non-null membership. We suggest that we allow a = -where d = x i e i . Having discussed in detail the procedure for obtaining the OWA weights to be used in the evaluation process we now can look at our approach to the evaluation of proposition of the form QD's are A where D and A are fuzzy subsets of the set X and Q is a regular quantifier. (1) Use Q and D and the methodology previously described to obtain the weights of the relevant OWA operator. We denote the weighting vector associated with this operator as W ( Q / D ) . ( ) For each xi calculate ci = D(xi) V A ( x i ) = Max[l -D ( x i ) , A i ( x ) ] (3) Evaluate the associated OWA function to obtain 3, The following example illustrates the procedure described above. Example. Evaluate In some regards we see the spirit of the approach suggested is to transform the proposition QD's are A's into " Q I D" X ' s are v A. QD's are In this case "QlD" is essentially like a new quantifier obtained from Q and D . We shall denote "QlD" as Q module D . In using the OWA approach we see that the truth of the proposition, 3 , is constructed as where bj is the j t h largest a; v a; and wj are the elements of the vector W ( Q , D ) . The following theorem shows us that elements not in D play no role in confirming the truth of the proposition QD's are A . THEOREM. Any element x* for which D(x*) = 0 makes no contribution to 9 Proof. As we have already shown if there exists some element x* for which D(x*) = 0 then w1 = 0. Furthermore From this we see To get a feel for what is happening in this approach we shall consider the linear quantifier Q(r) = r. Furthermore we shall assume that the d;s are already ordered, in ascending order. In this case wi = di/d and ci = v ai. We recall the bis are the ordered collection of the ci's in decreasing order. We first recall the low valued di's tend to have high zi's and thus appear to correspond to low This approximation closely resembles the result which would be obtained using Zadeh's method. 
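A minimal sketch of the type II evaluation just described, transcribing the steps literally: the weights W(Q/D) are obtained by partitioning the unit interval in proportion to the ordered membership grades of D, the arguments are c_i = max(1 - D(x_i), A(x_i)), and the truth value is their OWA aggregation. The quantifier and all membership grades below are invented.

def type2_owa_weights(Q, D_vals):
    """Weights W(Q/D): difference Q over cumulative points S_j built from the
    membership grades of D sorted in ascending order. Note that an element
    with D-grade 0 contributes weight 0, as stated in the text."""
    e = sorted(D_vals)                      # e_1 <= e_2 <= ... <= e_n
    d = sum(e)
    weights, prev, cum = [], 0.0, 0.0
    for ej in e:
        cum += ej
        S = cum / d
        weights.append(Q(S) - Q(prev))
        prev = S
    return weights

def eval_type2(Q, D: dict, A: dict) -> float:
    """Truth of 'Q D's are A' via the OWA construction described above."""
    w = type2_owa_weights(Q, list(D.values()))
    c = [max(1.0 - D[x], A.get(x, 0.0)) for x in D]
    b = sorted(c, reverse=True)
    return sum(wj * bj for wj, bj in zip(w, b))

Q_most = lambda r: 0.0 if r <= 0.3 else 1.0 if r >= 0.8 else (r - 0.3) / 0.5
D = {"x1": 1.0, "x2": 0.6, "x3": 0.0, "x4": 0.8}    # e.g. important criteria
A = {"x1": 0.9, "x2": 0.3, "x3": 0.7, "x4": 1.0}    # degrees of satisfaction
print(round(eval_type2(Q_most, D, A), 3))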
It also suggests an additional method for calculating the truth of the proposition QD's are A which combines some features of both approaches. Let cj be equal to the (zi V a,) ordered in descending order, i > k implies ci < ck, and let pj be the 1 ordered in ascending order, 1 > k implies pI > P k . Let r = x j c j * pj and then 3 = Q(r). V. QUOTIENT OPERATOR IN FUZZY DATABASES In this section we show that we can use some ideas from the evaluation of quantified statements in order to implement a quotient operator in fuzzy relational We first recall that in database theory a relation is a table that is character- That is T is a set of objects, t , such that for every object y in S if we adjoin t with y , ( t y ) , we get an object in R . The value in the a! column indicate the degree to which an element belongs to the relation SKILLS. Alternatively a! can be viewed as the degree to which person named has the associated skill type. In the following we shall have need for the following concepts. Assume R is a fuzzy relation on the scheme F . Let E be a subscheme of F. We define the projection of R onto E , denoted Proj,R as the fuzzy relation on E obtained by deleting the columns F-E and for any duplicates we take the maximum membership grade. We define the crisp projection of R onto E , denoted IProj,R I as the support of Proj,R, that is we just eliminate the membership column from Proj,R. In the following we shall define the fuzzy relational database quotient operation. As we shall see the ordinary quotient operation is a special case of this more general operation. Assume R is a fuzzy relational database on the scheme F . Let S also be a fuzzy relational database on the scheme B . We assume B is a subscheme of F , B C F and again denote E = F -B . Let Q be a regular linguistic quantifier. We define R + S Mod Q as a fuzzy relation T defined as follows. For each element u contained in the crisp projection of R onto, u E IProj,RI, we obtain its membership grade in T , T(u), as the degree of truth of the proposition. For Q elements y in S , (u, y ) is in R . In order to formally implement this operation we proceed as follows: Let R(u, *) be the fuzzy subrelation ofR consisting of those elements whose attribute value for E is u , it is the selection of R with E = u. Let R: be the projection of R(u, *) onto B . Then T(u) is the truth of the proposition. Q S ' s are R: . In the following example we show that the approach suggested is an extension of the usual quotient operator by looking at the crisp example originally used to introduce this operation. Exumple. We desire the people who have all the skills in S. (1) Find the people in R Name Jean Barbara Debbie Tina (2) For each element obtained above find R : . Skill type cuJean aBarbara aDebbie aTina The following example illustrates the use of this procedure in a fuzzy Example. Consider the fuzzy relation R described previously. Let S be a environment. fuzzy relation on the frame, skill type, signifying requiring manual dexterity. S: Skill type a I 1 .2 IV .o --Thus for each skill type a indicates the degree to which that skill requires manual dexterity. Consider the query5nd the people who have most of the skills that require manual dexterity?. For our purposes we shall define most simply as, Q(r) = r. The following steps implement this procedure. VI. CONCLUSION We have looked at the representation of linguistically quantified propositions via two interpretations: a probabilistic one and a logical one. 
We have shown how the OWA operator can be used to implement the logical interpretation. One development of this article was the extension of the OWA approach to the case where both predicates in the proposition QA's are B's are fuzzy.
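Returning to the database quotient of Section V, the sketch below illustrates R divided by S Mod Q: for each value u in the crisp projection of R onto the non-divisor attributes, the membership grade of u in the result T is the truth of "Q S's are R_u", computed with the type II evaluation. The relations and attribute names are hypothetical stand-ins, not the SKILLS example from the text; the linear quantifier Q(r) = r is the one used for "most" in that example.

def eval_quantified(Q, S: dict, Ru: dict) -> float:
    """Truth of 'Q S's are Ru' (type II evaluation, as in Section IV)."""
    e = sorted(S.values())
    d = sum(e)
    weights, prev, cum = [], 0.0, 0.0
    for ej in e:
        cum += ej
        weights.append(Q(cum / d) - Q(prev))
        prev = cum / d
    b = sorted((max(1.0 - S[y], Ru.get(y, 0.0)) for y in S), reverse=True)
    return sum(w * x for w, x in zip(weights, b))

# R: (name, skill) -> membership; S: skill -> degree to which it is required.
R = {("Jean", "I"): 1.0, ("Jean", "II"): 0.6,
     ("Tina", "I"): 0.3, ("Tina", "II"): 1.0}
S = {"I": 1.0, "II": 0.2}
Q = lambda r: r                           # the linear quantifier Q(r) = r

T = {}
for name in {n for (n, _) in R}:          # crisp projection of R onto "name"
    Ru = {skill: mu for (n, skill), mu in R.items() if n == name}
    T[name] = round(eval_quantified(Q, S, Ru), 3)
print(T)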
2018-01-23T22:41:51.671Z
1994-01-01T00:00:00.000
{ "year": 1994, "sha1": "7bf24d6db64d2120d1015022ae68d7371b15b4fa", "oa_license": null, "oa_url": "https://doi.org/10.1002/int.4550090604", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "5e11e8825223ed12e3e4e6cd1f665309d393f947", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
239025032
pes2o/s2orc
v3-fos-license
Prognostic Role of Monocytic Myeloid-Derived Suppressor Cells in Advanced Non-Small-Cell Lung Cancer: Relation to Different Hematologic Indices Methods We recruited 40 cases of advanced NSCLC, stages III and IV, aged > 18–<70 years old, and eligible to receive chemotherapy with or without radiotherapy, along with 20 healthy controls of comparable age and sex; after diagnosis and staging of patients, blood samples were collected for flow cytometric detection of Mo-MDSCs. Results Significant accumulation of Mo-MDSCs in patients compared to their controls (p < 0.0001). Furthermore, these cells accumulated significantly in stage IV compared to stage III (p = 0.006) and correlated negatively with overall survival (r = −0.471, p = 0.002), lymphocyte to monocyte ratio (r = −0.446, p = 0.004), and mean platelet volume to platelet count ratio (MPV/PC) (r = −0.464, p = 0.003), patients with Mo‐MDSCs < 13% had significantly better survival than those with Mo‐MDSCs ≥ 13% (p = 0.041). Conclusion Mo-MDSCs represent one of the key mechanisms in the immunosuppressive tumor microenvironment (TME) to play major roles not only in the carcinogenesis of lung cancer but also in disease progression and prognosis and, in addition, predict the efficacy of immune checkpoint inhibitors; our results provided some support to target Mo-MDSCs and needed to be augmented by further studies. Introduction Globally, lung cancer is one of the leading causes of cancerrelated death [1][2][3]. Although non-small-cell lung cancer (NSCLC) immunotherapy has fortunately emerged as a relatively promising area of research, immune checkpoint inhibitors have found an influential lantern for NSCLC patients. However, much work remains to elucidate lung tumor immunobiology and how alternative tumor microenvironments (TME) can affect patient survival across different NSCLC subtypes [4]. Recently, studies focused on TME and its role in tumor resistance; tumor suppressor cells within TME, namely, myeloid-derived suppressor cells (MDSCs), greatly attenuate the tumor response to chemotherapy and regrettably immunotherapy and subsequently affect NSCLC prognosis [5]. MDSCs encompass a range of immature cells whose unifying characteristics are their myeloid origin and ability to suppress T cell activation and T cell function. Phenotypically, these cells are defined by several markers; none of them is characteristic of MDSCs; the CD11b marker is expressed by all MDSCs [6]; there are two major subtypes of MDSCs; monocytic MDSCs express CD14, and polymorphonuclear MDSCs express CD15 and CD66b; both types express CD33 in addition to CD11b with the absence of HLA-DR. Growth factors controlling myelopoiesis could induce the accumulation and augment the suppressive activity of MDSCs, including GM-CSF and G-CSF in cancer patients [7]. C/EBPβ transcription factor, which is known to control emergency myelopoiesis, is expressed in chronic inflammation in many solid tumors and different inflammatory conditions, including infection, autoimmunity, obesity, and stress. These conditions have led to the hypothesis that chronic inflammation is a mechanism that increases the risk of cancer and tumor progression by acting as a driving force for MDSCs and subsequently suppressing antitumor immunity [8]. Vascular endothelial growth factor (VEGF), upregulated by hypoxia-inducible factor-1 in hypoxic TME, supports tumor progression through neovascularization. 
Studies done in NSCLC explored that VEGF attracts MDSCs to the tumor site and further promotes tumor progression [9]. MDSCs suppress both innate and adaptive immunity through cell to cell contact with their components, with T cells through sequestering the essential amino acids, cysteine, important for T cell activation. Furthermore, they downregulate the production of macrophage production of IL-12 favoring the development of tumor-promoting macrophage phenotype. In addition, they inhibit NK-mediated tumor cell lysis and recruit Tregs into the tumor site [15,16]. Moreover, MDSCs downregulate L-selectin on circulating naïve T cells, therefore suppressing T cell activation [17]. Additionally, MDSCs may contribute to carcinogenesis and tumor progression through nonimmunosuppressive mechanisms. Immature myeloid cells directly contribute to skin tumor development by recruiting IL-17-producing CD4+ T cells [18]. In addition, MDSCs endow stem-like quality to breast cancer cells through IL6/STAT3 and NO/NOTCH cross-talk signaling [19]. Moreover, MDSCs could enhance the stemness of cancer cells by inducing microRNA101 and suppressing the corepressor gene Cterminal-binding protein-2 [20]. Huang et al. [21] reported that Mo-MDSCs significantly increased in the peripheral blood of patients with NSCLC compared to healthy controls and correlated with worse prognosis. The current study is aimed at providing the predictive and prognostic role of Mo-MDSCs in advanced NSCLC relating them to different hematologic indices. Patients and Methods This study was a case-controlled study carried out at South Egypt Cancer Institute and Assiut University Hospital and approved by the ethical committee of Assiut University (approval ID no. 17300417). Informed consent was taken from all study participants. The study was conducted in accordance with the Declaration of Helsinki. All experiments were performed in accordance with relevant guidelines and regulations. We recruited 40 cases of advanced NSCLC, stages III and IV, aged > 18 -<70 years old, and eligible to receive chemotherapy with or without radiotherapy, along with 20 healthy controls of comparable age and sex. Informed consent in written form was taken from all participants. The study objectives were explained to the participants, and then, blood samples were collected by sterilized and safe maneuvers. We excluded patients with early stages, pretreated patients with chemotherapy, and patients with concurrent excruciating infection. After diagnosis and staging of patients and before the start of any line of treatment, blood samples were collected for flow cytometric detection of Mo-MDSCs. Systemic chemotherapy was the treatment commonly received in the form of platinum doublets (carboplatin or cisplatin plus either pemetrexed, paclitaxel, gemcitabine, or vinorelbine); some patients especially those with ECOG-PS3 received single agent chemotherapy, while in patients with stage III, concurrent chemoradiation with a 3-dimensional conformal radiotherapy was received after induction chemotherapy. Lymphocyte to monocyte ratio (LMR) was calculated by dividing the absolute lymphocytic count by the absolute monocytic count of the peripheral blood. Mean platelet volume to platelet count ratio (MPV/PC) was calculated by dividing the mean platelet volume by total platelet count in μl of peripheral blood. Statistics. 
The Shapiro-Wilk test was used to assess the normality of our data; all data were normally distributed except percentage monocytes, Mo-MDSCs, absolute monocytic count, absolute lymphocytic count, MPV/PC ratio, and age, with p values < 0.001, <0.001, 0.002, 0.04, 0.036, and 0.021, respectively, for which nonparametric tests were applied. Descriptive statistics (percentages, mean, median, and standard error) and inferential statistics were used to determine the significance of the data, including the independent-sample t-test, Mann-Whitney U test, Kruskal-Wallis test, and chi-square test. A ROC curve was used to find a cutoff value for Mo-MDSCs, Spearman rho correlation was used to determine the degree of association between scale variables, and all data were analyzed using SPSS version 26 and considered significant at p < 0.05. Overall survival (OS) was calculated from the time of diagnosis to the time of death or last follow-up recorded in patients' files. Accumulation of Mo-MDSCs in NSCLC Patients. At first, our results showed a significant accumulation of Mo-MDSCs in NSCLC patients compared with their comparable healthy controls (p < 0.0001) (Table 1). The mean age of the study patients was 59.5 years, with a male to female ratio of 1.7:1; although smoking is an established risk factor for lung cancer, 72.5% of the study patients were either never smokers or past smokers. ECOG-PS is one of the two most commonly used performance scales; considering that NSCLC is a debilitating disease that commonly manifests at a later stage, poor performance status was evident in our study (70% of the patients had ECOG-PS > 1); adenocarcinoma was the commonest pathologic type, found in 57.5% of the patients, and as expected, stage IV was evident in 42.5% of the patients; the rest of the characteristics are illustrated in Table 2. As expected, the significantly greater accumulation of Mo-MDSCs in stage IV than in stage III supports the possible role of these cells in disease progression and indirectly points to the role of immune-mediated destruction of tumor cells; however, these cells did not exhibit any significant change with other clinical characteristics (Table 3). Correlations between Mo-MDSCs and Overall Survival. The mean Mo-MDSC percentage showed negative correlations with OS, MPV/PC ratio, LMR, and the number of cycles of chemotherapy received (Table 4); further analysis demonstrated that the negative correlation between OS and Mo-MDSCs was clearly apparent in males but not in females (Figure 2). Eight out of nine, 10/14, and 1/17 patients with stages IIIA, IIIB, and IV, respectively, achieved more than one-year OS, compared with 1/9, 4/14, and 16/17 patients of the same stages who had lower than one-year OS, and the results were significant (p < 0.0001); additionally, the mean percentage of Mo-MDSCs for those with more than one-year survival was 13.01, compared with 14.79 for those with lower than one-year survival (p = 0.021, Figure 3). Multiple Linear Regression Test for Different Predictors of OS. Multiple linear regression was run to predict OS from 8 predictors found to significantly affect the mean OS, including age, sex, smoking, stage, performance status, Mo-MDSCs, MPV/PC ratio, and LMR; these variables collectively predicted OS with significant impact (F(8, 31) = 14.230, p < 0.0001, R² = 0.786); however, only stage and LMR added significantly to the prediction of OS.
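As an illustration of the regression just reported, the following sketch fits an ordinary least squares model of OS on a subset of the predictors using pandas and statsmodels; the data frame, its column names, and all values are hypothetical, not the study data, and only four of the eight predictors are included for brevity.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data; column names are illustrative.
df = pd.DataFrame({
    "os_months": [4, 22, 15, 7, 30, 11, 18, 6, 25, 9],
    "age":       [62, 55, 60, 66, 49, 58, 63, 69, 52, 61],
    "stage_iv":  [1, 0, 0, 1, 0, 1, 0, 1, 0, 1],
    "mo_mdsc":   [16.0, 10.5, 12.8, 15.2, 9.8, 14.6, 11.9, 17.1, 10.1, 15.9],
    "lmr":       [1.8, 3.5, 2.9, 2.0, 4.1, 2.2, 3.0, 1.6, 3.8, 2.1],
})

model = smf.ols("os_months ~ age + stage_iv + mo_mdsc + lmr", data=df).fit()
print(model.summary().tables[1])   # coefficients, p values, 95% CIs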
Looking at Mo-MDSCs, B = 0.018 and p = 0.9 indicated that only 1.8% of the change in OS variance was attributable to change in Mo-MDSCs when all remaining predictors were held constant. Moreover, the sign of the estimate was potentially misleading, as it implied that each one-percentage-point increase in Mo-MDSCs was accompanied by an increase in OS of 0.018 months; in any case, the coefficient was not significant (Table 6). Discussion Nowadays, it is well established that the tumor microenvironment and the immune system play a crucial role in the initiation and progression of different cancers, including NSCLC [22]. MDSCs are considered the major suppressors of the immune system, interfering with both innate and adaptive immune responses. Mo-MDSCs were found to be more highly expressed in peripheral blood than in excised tissues and lymph nodes of NSCLC patients [23]; consequently, flow cytometric analysis of peripheral blood is a reliable method for detecting these cells. In the current study, Mo-MDSCs were significantly more prevalent in the peripheral blood of NSCLC patients than in healthy controls. Furthermore, increased levels of these cells were associated with poor prognostic features, including advanced stage, low LMR, low MPV/PC ratio, and poor overall survival. Mo-MDSCs were reported to produce TGF-β in the peripheral blood [24]; furthermore, TGF-β is known to induce immunosuppression and promote angiogenesis in the TME; in another study, this cytokine was produced by Mo-MDSCs in all tissues [23]. Several studies demonstrated that NSCLC was associated with high levels of Mo-MDSCs, which in turn were responsible for the resistance of this tumor to different systemic therapies [21,[25][26][27]. Yamauchi et al. [28] showed that a significant increase in the percentage of circulating Mo-MDSCs was observed in patients with resectable non-small-cell lung cancer compared with healthy donors; in addition, preoperative levels of Mo-MDSCs predicted recurrence-free survival after surgery. In accordance with the previously mentioned studies, our results showed significantly increased levels of Mo-MDSCs in patients compared with healthy controls, in addition to an association of these cells with poor survival in NSCLC patients. Interestingly, immunophenotyping analysis was performed on peripheral blood samples from seven patients with lung cancer unfit for surgery and treated with stereotactic body radiotherapy (SBRT) to evaluate the impact of SBRT on patients' immune cells, including Mo-MDSCs, and reported a significant decrease in these cells after RT [30], indicating that chemotherapy is not the only treatment modality affecting these immune cell levels; however, there was no uniform effect of chemotherapy on the peripheral blood percentages or immunosuppressive function of Mo-MDSCs, although three cycles of bevacizumab-based chemotherapy were associated with significantly reduced levels of these cells [31]. Our study agreed with the latter finding, in that the percentage of Mo-MDSCs negatively correlated with the number of cycles of chemotherapy. The percentage of MDSCs in patients with colorectal cancer with LMR ≤ 2.4 was statistically higher than in those with LMR > 2.4 (p = 0.012), and patients with LMR ≤ 2.4 exhibited a statistically lower RFS than those with LMR > 2.4 (p = 0.008) [32].
[Figure 5: Differences in myeloid-derived suppressor cells (Mo-MDSCs), lymphocyte to monocyte ratio (LMR), and mean platelet volume to platelet count (MPV/PC) ratio for patients with >12 months of survival compared with patients with <12 months of survival; data analyzed by the Mann-Whitney test and the independent-sample t-test.]
Likewise, a negative correlation between [...] (2.271, p = 0.0008) [34]; also, a low MPV/PC ratio was associated with poor prognostic features, including advanced stage and poor performance status, in these patients [35]. Our results in turn adhered to the previous studies, in that LMR and MPV/PC ratio were negatively correlated with OS. The inverse relation between MDSCs and LMR remains to be elucidated; increased peripheral MDSCs contribute to peripheral monocytosis and hence a lower LMR, and peripheral monocytosis has been reported to be associated with poor clinical outcomes [32]; we found significant monocytosis in NSCLC patients compared with their controls, together with a negative correlation between LMR and Mo-MDSCs in our patients. It is worth mentioning that limited data are available to adequately support our finding of a negative correlation between Mo-MDSCs and the MPV/PC ratio; however, in line with previous studies relating the MPV/PC ratio to poor prognosis in NSCLC, a low MPV/PC ratio may be correlated with increased Mo-MDSCs. Several limitations exist in the current study, including the small number of enrolled patients, the lack of immunohistochemical evaluation of MDSCs in tumor tissues, and the heterogeneity of the studied patients. Future work is recommended, including the addition of CD14, CD11b, CD15, and CD66b immunohistochemistry and qPCR of tissue to further verify the hypothesis. Also, data about proinflammatory cytokines, including IL6, IL8, and TNFα, are needed to demonstrate the relationship between inflammation factors and MDSCs. In conclusion, Mo-MDSCs represent one of the key mechanisms in the immunosuppressive TME, playing major roles not only in the carcinogenesis of lung cancer but also in disease progression and prognosis, and, in addition, they may predict the efficacy of immune checkpoint inhibitors; our results provided some support to target Mo-MDSCs and need to be augmented by further studies. Data Availability The data used to support the findings of this study are available from the corresponding authors upon request.
Monounsaturated Fatty Acids in Obesity-Related Inflammation
Obesity is an important aspect of the metabolic syndrome and is often associated with chronic inflammation. In this context, inflammation of organs participating in energy homeostasis (such as liver, adipose tissue, muscle and pancreas) leads to the recruitment and activation of macrophages, which secrete pro-inflammatory cytokines. Interleukin-1β secretion, sustained C-reactive protein plasma levels and activation of the NLRP3 inflammasome characterize this inflammation. The Stearoyl-CoA desaturase-1 (SCD1) enzyme is a central regulator of lipid metabolism and fat storage. This enzyme catalyzes the generation of monounsaturated fatty acids (MUFAs), major components of triglycerides stored in lipid droplets, from saturated fatty acid (SFA) substrates. In this review, we describe the molecular effects of specific classes of fatty acids (saturated and unsaturated) to better understand the impact of different diets (Western versus Mediterranean) on inflammation in a metabolic context. Given the beneficial effects of a MUFA-rich Mediterranean diet, we also present the most recent data on the role of SCD1 activity in the modulation of SFA-induced chronic inflammation. Inflammation in the Metabolic Syndrome Obesity is the main factor responsible for the development of the metabolic syndrome, which is characterized by metabolic complications including visceral adiposity, hypertension, high circulating cholesterol and elevated glycemia [1][2][3]. This pathological combination often leads to insulin resistance and type 2 diabetes and is associated with a sustained inflammation profile [4,5]. In North America, people with a body mass index (BMI) greater than 30 are considered obese. This represents approximately 36% of the population of North America and 13% worldwide [6]. Obesity is characterized by an excessive accumulation of lipids in adipose tissue. This accumulation becomes deleterious when it occurs in visceral fat [7]. In fact, waist circumference (as an indirect measure of visceral fat accumulation) is correlated with the development of specific metabolic disorders including cardiovascular diseases, hypercholesterolemia and type 2 diabetes [8]. When excessive lipid accumulation in adipose tissues occurs, ectopic accumulation (steatosis) appears in other tissues such as liver and muscle [8][9][10]. Lipid-saturated adipocytes release free fatty acids into the blood through the action of the Fatty acid translocase (FAT/CD36), the plasmatic Fatty acid binding protein (FABPpm) and the Fatty acid transport proteins (FATPs). These circulating free fatty acids are then captured by other organs, especially the liver and muscle, which gives rise to steatosis [11,12]. Accumulation of long-chain fatty acids in non-adipose cells leads to the formation of toxic lipids such as ceramides and cholesterol esters [13]. These lipids induce lipotoxicity, leading to deleterious metabolic consequences including endoplasmic reticulum (ER) stress and inflammation [14,15]. Several population studies reveal that a low-grade, chronic inflammation often develops in obese patients [16]. This is characterized by increased circulating levels of pro-inflammatory cytokines, especially Interleukin-6 (IL-6), and of the chemokine MCP-1, both produced by the adipose tissue. Consequently, monocytes are recruited to the adipose tissue, inducing the secretion of other cytokines such as IL-1β and amplifying the inflammatory state [17,18].
In response to elevated cytokine levels, the liver secretes C-reactive protein (CRP), a key marker of inflammation associated with several metabolic diseases including type 2 diabetes and cardiovascular diseases [19][20][21][22]. CRP also aggravates disease development by activating the NF-κB signaling pathway, which is directly implicated in the expression of pro-inflammatory cytokines [23]. The Molecular Mechanisms of Inflammation There are two main types of inflammation: acute and chronic. Acute inflammation appears in response to infections or injuries. This type of inflammation involves polymorphonuclear neutrophils and is characterized by the appearance of swelling and heat around the damaged tissues. Activation of Toll-like receptors (TLRs) triggers the expression of inflammation effectors such as cytokines, prostaglandins, platelet activation factors, inflammasome complexes, CRP, as well as NF-κB [24]. The resolution of this inflammation requires several conditions: destruction of the cause of inflammation, neutralization of pro-inflammatory markers (cytokines and prostaglandins) and clearance of neutrophils. These events typically occur in a few days, making this type of inflammation transient by nature [25]. The second type of inflammation, chronic inflammation, is sustained over time and is more deleterious to health. It often appears in individuals with poor feeding habits and a sedentary lifestyle, features strongly correlated with obesity development [26,27]. It is also present in different pathologies such as Alzheimer's disease and asthma, and in several diseases associated with unbalanced metabolism such as atherosclerosis, cardiovascular diseases and type 2 diabetes [28][29][30][31]. Often named microinflammation or metabolic inflammation, it entails a complex mechanism involving crosstalk between various tissues (such as liver and adipose tissue) across the entire body. In general, this low-grade inflammation appears when cellular stress is recognized by the immune system [32]. Consequently, monocytes are recruited and infiltrate the tissues, becoming macrophages [24]. In inflammatory conditions such as obesity, two distinct macrophage subpopulations, associated with different functions, can be found in the affected organs. The so-called M1 macrophages display an extreme pro-inflammatory state. They express high levels of pro-inflammatory receptors such as TLRs, Tumor necrosis factor receptors (TNFRs) and the Interleukin-1 receptor (IL-1R), and exhibit a powerful activation of the NF-κB transcription factor necessary for the expression of pro-inflammatory cytokines. Conversely, the M2 macrophages are anti-inflammatory and are characterized by a higher expression of the Interleukin-4 receptor (IL-4R), whose activation downregulates inflammatory mediators such as TNF-α and IL-6. They also display an activation of the transcription factors PPARγ and PPARδ, which leads to higher expression of anti-inflammatory cytokines such as IL-10 [33]. The inflammation level present in tissues is therefore dependent on the balance between infiltrated M1 and M2 macrophages. This balance can be modulated by diet and hormonal status and is regulated by the PPARγ transcription factor [34]. A number of potential inflammation triggers have been identified in the context of chronic inflammation. TLR4 is activated by circulating long-chain saturated fatty acids [35].
Consequently, the IKK-IκB signaling cascade leads to nuclear translocation of NF-κB, which activates the transcription of several pro-inflammatory cytokines and interleukins [36]. High circulating levels of pro-inflammatory cytokines such as TNF-α, MCP-1, TGF-β and IFN-γ, as well as of the interleukins IL-6, IL-1β, IL-18, and IL-8, are observed in patients presenting an inflammatory state [37]. TLR4 activation is also linked to the increased expression of several proteins involved in the formation of inflammasomes, multiprotein complexes responsible for the activation of inflammatory responses. This is the case in particular for NLRP3 (NOD-like receptor family, pyrin domain containing 3), an inflammasome complex involved in several diseases associated with chronic and low-grade inflammation [38,39]. NLRP3 is considered an intracellular receptor responsible for the activation of inflammatory responses. Several factors can activate NLRP3, including elevated concentrations of intracellular ATP, reactive oxygen species (ROS), mitochondrial oxidized DNA, and lysosomal destabilisation [40]. It can also be activated by low intracellular potassium or high calcium concentrations, which arise in response to cellular stress [40]. Once NLRP3 is activated, the caspase 1 subunit of the NLRP3 complex cleaves pro-interleukins into mature IL-1β and IL-18, key circulating markers of low-grade inflammation [41]. NLRP3 is considered a key factor responsible for the induction and progression of chronic inflammation. In fact, disruption of NLRP3 in adipose tissues decreases the concentration of pro-inflammatory cytokines and restores insulin sensitivity in obese mice [42]. Another mechanism involved in the development of chronic inflammation is excessive storage of triglyceride (TG) lipids within adipose tissues. Sedentary lifestyles and poor eating habits aggravate this unbalanced TG storage. In mice, excessive TG storage in white adipose tissue (WAT) induces secretion of pro-inflammatory adipokines such as IL-1β, TNF-α, MCP-1, and IL-6, triggering systemic metabolic inflammation [43]. In addition, excessive TG storage feeds lipolysis and increases the amount of intracellular and circulating free fatty acids (FFAs) (Figure 1). These fatty acids act as stress-inducing molecules which, captured by TLR4, induce activation of NF-κB and, in turn, induce NLRP3 expression in macrophages (Figure 1). In addition, intracellular FFAs can impair mitochondria and lysosome integrity, generating ROS (Figure 1) [44]. FFAs can also inactivate the serine-threonine kinase AMPK, an intracellular energy sensor. In this situation, secretion of IL-1β (via activation of the NLRP3 inflammasome) is increased and leads to lower insulin sensitivity [45]. Several authors even suggest that activation of AMPK can be considered an anti-inflammatory marker in the context of metabolic inflammation [46,47].
Figure 1. Crosstalk between adipocyte and macrophage leading to enhanced inflammation. FFAs (free fatty acids) produced as a consequence of SFA (saturated fatty acid) overload activate the TLR4 pathway, leading to MCP-1 (Monocyte chemoattracting protein-1), IL-6 (Interleukin-6) and TNF-α (Tumor necrosis factor alpha) secretion by adipocytes via NF-κB (Nuclear factor-kappa B) nuclear translocation.
TNF-α activates TNFR (Tumor necrosis factor receptor) on recruited macrophages which, in combination with the TLR4 pathway, triggers NF-κB nuclear import and production of NLRP3 (NOD-like receptor family, pyrin domain containing 3), pro-IL-1β and pro-IL-18. Lysosomal disruption, as a consequence of ATP (adenosine triphosphate) and ROS (reactive oxygen species) accumulation, triggers NLRP3 activation and results in IL-1β/IL-18 maturation and secretion. This figure was generated with BioRender. Overview of Lipid Metabolism Fatty acid molecules are structurally very diverse and, accordingly, are involved in several different biological functions. For example, phospholipids are an integral part of cell membranes, while TGs are mainly involved in energy storage. There are two sources of lipids in the organism: dietary intake and de novo synthesis. In humans, dietary lipids such as cholesterol and TGs, as well as long-chain saturated and unsaturated fatty acids, are absorbed in the form of micelles by the intestinal enterocytes. Meanwhile, short- and medium-chain fatty acids (2 to 10 carbons in chain length) can directly cross enterocyte membranes and reach the bloodstream [48,49]. Enterocytes secrete lipids into the lymphatic and blood circulation in the form of chylomicrons. The liver then captures part of the chylomicrons, using the extracted lipids to assemble very low-density lipoproteins (VLDLs) containing Apolipoprotein B-100 (apoB-100). Secreted, circulating VLDLs transfer their lipids to the rest of the organism, becoming low-density lipoproteins (LDLs) in the process. In parallel to this system, enterocytes and hepatocytes secrete Apolipoprotein A-I (apoA-I) which, in complex with the uncaptured chylomicrons, forms high-density lipoproteins (HDLs) [50]. The main known function of HDLs is to sequester the cholesterol coming from peripheral organs and bring it to the liver [51]. Several mechanisms allow the intake of lipids into cells. Cholesterol is captured via the transmembrane Scavenger Receptor class B type I (SRB1) [52], while TGs integrated into lipoproteins are hydrolyzed by Lipoprotein lipase at the surface of epithelial cells. The FFAs generated are then absorbed by cells through different transporters such as the Fatty acid transport proteins (FATPs) and the Fatty acid translocase (FAT/CD36). The internalized FFAs are rapidly esterified into fatty acyl-CoA, which can then be transformed back into TGs. This esterification process involves various fatty acyltransferases such as GPAT (Glycerol-3-phosphate acyltransferase) and DGAT (Diacylglycerol O-acyltransferase). Newly formed TGs are subsequently integrated into intracellular lipid droplets (LDs), where they are stored [53]. LDs are present in all eukaryotic cells. In normal conditions, lipids are preferentially stored in adipocytes, forming very large LDs. Under conditions where adipocytes are saturated (as in obesity), lipids can be stored in other cells such as hepatocytes and myocytes, forming much smaller LDs [54]. This ectopic storage often leads to metabolic disorders and their associated inflammation. The other source of lipids in the organism is de novo lipid synthesis, also termed lipogenesis. This process occurs in most cells but, in humans, it principally occurs in hepatocytes (Figure 2) and adipocytes [55]. Lipogenesis synthesizes long-chain saturated fatty acids (palmitate) from acetyl-CoA generated by glucose hydrolysis.
This synthesis is catalysed by the combined actions of Acetyl-CoA carboxylase (ACC) and Fatty acid synthase (FAS). Subsequently, saturated fatty acids (SFAs) are elongated by Fatty acid elongases (ELOVLs) [56] and/or desaturated by Stearoyl-CoA desaturases (SCDs), forming monounsaturated fatty acids (MUFAs) [57]. SCDs are the rate-limiting enzymes of MUFA formation. They are integrated into the ER membrane and are highly regulated by nutritional status and by hormonal regulators of appetite such as insulin [58,59]. SCDs introduce a delta-9 desaturation in the SFAs stearate (C18:0) and palmitate (C16:0), forming the MUFAs oleate (C18:1n-9) and palmitoleate (C16:1n-7), respectively. These MUFAs are the main components of TGs (fatty acids that are preferentially stored) [60], cholesterol esters (cellular membrane components, precursors to steroid hormones and biliary acids) [61] and wax esters (compounds preventing evaporative water loss) [62]. They also constitute a large proportion of the phospholipids comprising cellular membranes [57]. As such, SCDs are considered key regulators of lipid homeostasis, especially in liver and adipose tissue where lipogenesis is predominant. Modulation of SCD activity has been implicated in the development of the metabolic syndrome and its associated inflammatory state. Therefore, several studies have suggested targeting SCDs in order to treat various aspects of the metabolic syndrome, including type 2 diabetes and cardiovascular diseases [63][64][65]. In humans, there are two SCD isoforms, SCD1 and SCD5. SCD5 is mainly expressed in the brain, while SCD1 is more ubiquitously expressed [66,67]. In mice, the situation is more complex, as four isoforms have been characterized (SCD1-4). They all share 85% amino acid homology with human SCD1, while SCD5 appears to be specific to primates. Mouse SCD1 is mainly expressed in lipogenic organs such as liver and adipose tissues. SCD2 is chiefly expressed in the brain, while SCD3 is found in the harderian, preputial and sebaceous glands. SCD4 expression has only been reported in the heart [68][69][70][71][72]. Stearoyl-CoA Desaturase-1 SCD1 is the most characterized SCD isoform. SCD1 transforms 85% of stearate and 51% of palmitate (of both dietary and lipogenic origin) into MUFA [68]. Many studies have been performed in SCD1 knockout mice to better understand its role in metabolic processes. Global SCD1 knockout mice, in which every cell of the organism is SCD1 deficient, present with a lack of sebum secretion and of lacrimal surfactant [73]. The lack of sebum gives rise to very dry skin with less hair and has led to the consideration of topical SCD1 inhibition as a potential treatment for acne. Global SCD1 knockout mice are protected against obesity [74], insulin resistance [75] and fatty liver disease [61], as induced by both high-carbohydrate diet (HCD) [76] and high-fat diet (HFD) [74,75]. These mice display increased levels of plasma ketone bodies, while the levels of circulating insulin and leptin are reduced [75]. Glycemia is also improved, as determined by a glucose tolerance test. The metabolic profiles of global knockout mice are more beneficial than those of their wildtype counterparts, as seen through the upregulation of lipid oxidation genes and the downregulation of lipid synthesis genes [74,76]. In contrast to global knockout mice, mice with a specific deletion of SCD1 in the liver are only protected from the deleterious effects of HCD (and not HFD). Under HCD, liver-specific knockout mice show a reduction of hepatic lipogenic enzyme gene expression as well as a reduction of plasmatic TG relative to controls [76]. As could be expected, these mice display a decrease of hepatic steatosis and associated metabolic complications such as hypercholesterolemia. This is consistent with diminished activation of SREBP-1 (as measured by protein maturation and nuclear localization levels) and with increased protein expression of the lipolysis transcription factor PPARα and the mitochondrial uptake acyl transporter Carnitine O-palmitoyl transferase 1 (CPT1) in the liver of global SCD1-deficient mice [77]. However, under HFD, liver-specific knockout mice develop hepatic steatosis and insulin resistance [78]. The steatotic effect of HFD on liver-specific knockout mice is probably due to the presence of SFA in the diet, which can be desaturated and integrated into TG, and, subsequently, into chylomicrons by enterocytes that still express SCD1. The chylomicrons can then be captured by the liver, leading to hepatic steatosis and associated hepatic dysfunctions [76,79]. SCD1 expression is chiefly controlled by the lipogenic transcription factor SREBP-1c [77,80]. Under post-prandial conditions, the rise of lipemia and glycemia induces insulin secretion, one of the most important lipid anabolic hormones. Insulin activates the PI3K-PKB-mTORC1 signaling pathway, which induces the nuclear translocation of SREBP-1c and activates expression of enzymes involved in lipogenesis, including SCD1 [81].
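Because SCD1 activity is rarely measured directly, the delta-9 desaturation described above is usually inferred from product-to-precursor ratios (16:1n-7/16:0 and 18:1n-9/18:0) in plasma or tissue fatty acid profiles; the MUFA/SFA ratios mentioned later in this review as proxies of SCD1 activity are of the same kind. The snippet below is a minimal sketch with made-up mol% values and hypothetical field names, not data from any of the cited studies.

```python
# Minimal sketch (illustrative values, hypothetical field names): computing
# SCD1 desaturation indices from a fatty acid profile expressed as mol% of
# total fatty acids. These product/precursor ratios are commonly used as
# surrogates for SCD1 activity; they are not measurements from this review.
profile = {
    "C16:0": 22.0,     # palmitate (SFA, SCD1 substrate)
    "C16:1n-7": 2.5,   # palmitoleate (MUFA, SCD1 product)
    "C18:0": 7.0,      # stearate (SFA, SCD1 substrate)
    "C18:1n-9": 28.0,  # oleate (MUFA, SCD1 product)
}

def desaturation_index(profile, product, substrate):
    """Return the product/precursor ratio, e.g. 16:1n-7 / 16:0."""
    return profile[product] / profile[substrate]

scd1_16 = desaturation_index(profile, "C16:1n-7", "C16:0")
scd1_18 = desaturation_index(profile, "C18:1n-9", "C18:0")

# A crude overall MUFA/SFA ratio, as used in some of the human studies cited below.
mufa = profile["C16:1n-7"] + profile["C18:1n-9"]
sfa = profile["C16:0"] + profile["C18:0"]
print(f"16:1/16:0 = {scd1_16:.2f}, 18:1/18:0 = {scd1_18:.2f}, MUFA/SFA = {mufa / sfa:.2f}")
```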
There are other lipogenic transcription factors activated by dietary and hormonal factors such as insulin and glucose. Expression of lipogenic genes such as SCD1, FAS and ELOVL6 is triggered by the Liver X receptor (LXR), which is activated by insulin, and by the Carbohydrate response element binding protein (ChREBP), itself activated by glucose [82]. One of the main LXR targets in lipid metabolism (especially of the LXRα isoform) is SREBP-1c, driving the expression of SCD1 [83]. Furthermore, MUFAs (products of SCD activity) can regulate lipogenesis through AMPK phosphorylation [84,85]. Phosphorylated AMPK inhibits the mTORC1 complex [86], reducing the nuclear translocation of SREBP-1c and the expression of lipogenic genes like SCD1. Human Studies-Effect of Dietary SFAs The type of lipids present in animal organisms is strongly influenced by diet [87]. Dietary SFAs are deleterious to metabolic health, as they play an important role in the development of obesity, metabolic syndrome and chronic inflammation [88]. In fact, a high level of SFAs in the diet can be considered a pro-inflammatory factor in itself. Several studies have described clear correlations between the consumption of Western diets, rich in SFA, and the presence of obesity, hepatic steatosis and type 2 diabetes in humans [89][90][91]. Acute intake of SFA-rich diets triggers the development of an inflammatory profile in human subcutaneous adipose tissues, which includes increased expression of several genes involved in the synthesis of pro-inflammatory chemokines and cytokines [92]. In addition, compared to unsaturated fatty acid-rich diets, SFA-rich diets increase lipid storage within adipose tissues [90]. The adipocytes develop larger LDs and, therefore, contain more TGs. This increased intracellular TG pool leads to increased leptin secretion by adipocytes [93]. Furthermore, high circulating leptin is correlated with increased macrophage secretion of IL-1β, IL-6 and TNF-α [94,95]. A clinical trial has shown that a single 1000 kcal meal containing 60% fat (mainly SFA) leads to elevated plasmatic IL-6 concentrations [96]. This type of systemic inflammation is associated with vascular damage leading to coronary heart disease [96]. Animal Studies-Effect of Dietary SFAs In agreement with observations made in humans, feeding rodents with diets rich in saturated fat increases hepatic and plasmatic TG levels and raises circulating IL-6 concentration [97,98]. Animals also develop glucose intolerance while macrophage recruitment in the liver is increased [97,99]. This suggests that inflammation is a consequence of diet-induced metabolic changes. Indeed, mice fed for 15 weeks with a HFD containing a majority of SFAs display an increased expression of hepatic TLR4 [98]. These animals also show elevated plasmatic concentrations of IL-6, TNF-α and MCP-1, and a lowered plasmatic concentration of the anti-inflammatory cytokine IL-10 [98]. Mice under an SFA-rich HFD develop muscle steatosis due to accumulation of palmitate and stearate [100]. SFAs can also induce inflammation in the central nervous system. Brains of mice fed for 8 weeks with a HFD (composed mainly of SFA) display high concentrations of inflammatory markers (IL-6, IL-1β and TNF-α) and low levels of IL-10 [101]. Mice on a SFA-rich diet for as little as 4 weeks show elevated activation of NF-κB and, through TLR4 activation in the hypothalamus, expression of inflammatory markers (IL-1β, TNF-α and IFN-γ) in the brain as well as in the plasma [102,103].
This inflammation can even contribute to the development of obesity, at least in mice. Sustained HFD-induced inflammation in the arcuate nucleus, a specific region of the hypothalamus that regulates energy homeostasis, triggers microglia recruitment and fosters the death of satiety neurons [104]. Cellular Models-Effect of Exogenous SFAs In vivo studies are performed with diets containing a mix of several types of fatty acids, which are at least partially transformed during the digestion process. This complicates interpretation of the results of these studies. Therefore, treatment of cultured cells with exogenous fatty acids has been used to determine the effect of specific SFAs expected to be found in the post-prandial circulation. Adipocyte cell models can provide insight into the in vivo mechanisms taking place within adipose tissue. Incubation of 3T3-L1 preadipocytes and rat primary epididymal adipocytes with palmitate for 24 h induces TNF-α and IL-6 secretion [105]. This treatment also increases the release of Monocyte chemoattractant protein-1 (MCP-1) [106,107], which has the potential to induce the recruitment of macrophages in vivo as well as their polarization into an M1 pro-inflammatory state. Exposure of pancreatic β cells (the 1.1B4 human cell line and rat primary cells) to palmitate increases secretion of IL-6 and IL-8, as well as ROS production. It is also associated with impaired insulin secretion [108,109]. This process has the potential to explain, at least in part, why saturated fat-rich diets lead to the development of type 2 diabetes. In mouse microglia BV2 cells, palmitate treatment for 4 h induces IL-1β, IL-6 and TLR4 gene expression, as well as NF-κB induction [103]. In the RAW 264.7 mouse macrophage cell line, lauric acid (a 12-carbon chain SFA) can directly bind TLR4 and activate the nuclear translocation of NF-κB. This subsequently activates the expression of pro-inflammatory cytokines, especially TNF-α [110,111]. Treatment of RAW 264.7 cells with palmitate inhibits the expression of the transcription factor PGC-1β, which indirectly activates the nuclear translocation of NF-κB [112]. This leads to increased secretion of the inflammatory cytokines TNF-α and IL-1β into the medium. Interestingly, when this medium is added to cultured 3T3-L1 preadipocytes, activation of the PI3K-PKB pathway is impaired, suggesting a decrease in insulin sensitivity [113]. The effect of SFAs on muscle cells has also been studied in vitro. Treatment of C2C12 mouse myotube cells with palmitate increases lipid storage, as observed via lipid droplet size [114]. As for other cell types, this intracellular lipid accumulation causes lipotoxicity (elevated ROS and ER stress) and insulin resistance (disruption of PKB signaling). It also triggers NF-κB nuclear translocation, leading to the expression of pro-inflammatory cytokines such as TNF-α [114]. Human Studies-Effect of Dietary MUFAs While SFAs increase inflammation, unsaturated fatty acids often have the opposite effect. Polyunsaturated fatty acids (PUFAs), especially the omega-3 class, have favorable effects on health. Several population studies have indeed demonstrated that, compared to SFA-rich Western diets, diets rich in omega-3 PUFAs exert beneficial metabolic effects, at least in part by decreasing inflammation [115][116][117]. The effects of MUFAs on inflammation are less documented, but a growing body of evidence links MUFAs to anti-inflammatory states [92].
Dietary lipids are assimilated in the gut and then transported throughout the entire organism, where they influence organ metabolism. Higher MUFA consumption increases MUFA levels, and reduces both SFA and PUFA levels, throughout the body [118]. The type of lipids present in our body can therefore be modulated through nutrition. The impact of the Mediterranean diet has been studied in humans, including in several randomized crossover studies (Table 1) [119][120][121]. This diet is characterized by a high consumption of fish, olive oil, fruits and vegetables, and whole grains. In this type of diet, fat constitutes one third of the total kcal absorbed, with almost 60% MUFA and 20% SFA [122]. For comparison, the Western diet has a similar amount of total fat but with a much lower proportion of MUFA (36% MUFA and 33% SFA) [119]. Compared to other diets, the Mediterranean diet is associated with lower blood pressure, as well as improved glucose and lipid blood profiles [123][124][125]. The Mediterranean diet lowers cardiovascular disease risk and even leads to beneficial gut microbiome changes: increasing the Bacteroides, Prevotella and Faecalibacterium genera, which are known to improve general metabolic health and prevent atherosclerosis and thrombosis (Table 1) [121,126]. In fact, olive oil, one of the main components of the Mediterranean diet, has been characterized as a prebiotic improving the host-microbial ecosystem (Table 1) [120]. Interestingly, supplementation of food with olive oil (an oil that is naturally enriched with the SCD1 product oleate) correlates with low occurrences of obesity and metabolic syndrome, and therefore less chronic inflammation and mortality [127,128]. Furthermore, people consuming a Mediterranean diet generally show lower levels of the systemic inflammation profile that often appears when Western or carbohydrate-rich diets are consumed (Table 1) [129][130][131][132]. Consumption of a Mediterranean diet for 3 to 4 weeks is also correlated with increased secretion of adiponectin, an adipokine with anti-inflammatory effects [94,133]. Similar observations on inflammation are made when subjects are fed with olive oil (Table 1) [131,134,135]. Subjects fed a diet rich in olive oil for a period ranging from 3 weeks to 2 years display lower levels of circulating mononuclear cells (monocytic cells involved in the inflammatory response). In addition, their plasmatic pro-inflammatory cytokine levels (such as TNF-α, MCP-1, IFN-γ, CRP, IL-18, and IL-6) are lower when compared to subjects on a Western diet for the same period of time [131,[136][137][138]. Compared to a one-time oral dose of a fat emulsion containing cow's milk cream (25% oleate and 26% palmitate), an emulsion of olive oil (70% oleate and 15% palmitate) generates a more favorable plasmatic lipid profile, including a higher plasmatic concentration of MUFA-rich TG. Interestingly, in the same study, the authors incubated mouse BV2 microglia cells with purified plasmatic lipoproteins from these subjects. Upon treatment, the incubated cells switched from an M1 pro-inflammatory state to an M2 anti-inflammatory state in the presence of MUFA-rich TG (Table 1) [139]. This observation has been confirmed in another study on isolated human blood monocytes [140]. Anti-inflammatory effects of MUFA have been reported when MUFA are part of a dietary intervention. However, increased MUFA levels in vivo do not always have positive impacts on inflammation.
In patients with chronic kidney disease, an elevated MUFA/SFA ratio in blood lipids (presumably reflecting increased SCD1 activity) is correlated with high levels of circulating CRP, suggesting an aggravation of inflammation [141]. In obese patients who underwent bariatric surgery, the concentration of lipids in SAT (subcutaneous adipose tissue) and VAT (visceral adipose tissue) was measured by gas chromatography. The MUFA proportion in these SAT and VAT samples is negatively correlated with inflammation and obesity-related conditions such as insulin resistance and type 2 diabetes, as measured by gene expression [91]. Though the beneficial effects of the Mediterranean diet cannot simply be attributed to its high olive oil content, as it also contains many omega-3 fatty acids, these population studies strongly suggest that dietary MUFA have anti-inflammatory effects, especially compared to SFA-rich diets such as the Western diet. Animal Studies-Effect of Dietary MUFAs To further investigate the effects of MUFA in the diet, several studies have been performed on HFD-fed mice. These animals allow for measurements of metabolic and inflammatory markers throughout the organism, rather than only in the blood or on surgical samples. Mice raised for 4 weeks on a diet rich in olive oil show elevated plasmatic concentrations of MUFA with no change in hepatic SCD1 gene expression [142]. As in humans, then, MUFA-rich diets can be used to study the effects of MUFA on systemic responses in rodents. Studies performed in mice raised on a MUFA-rich diet for 15 weeks show higher circulating levels of anti-inflammatory markers (IL-4 and IL-10) and lower levels of pro-inflammatory markers (IL-6, MCP-1, IL-1β and TNF-α) compared to mice fed with a SFA-rich diet [98]. Even in obese and hypercholesterolemic mouse models, a MUFA-rich 8-week-long diet improves metabolic features, increasing the expression of anti-inflammatory markers such as IL-4, IL-10 and PPARγ. In addition, a decrease in circulating levels of pro-inflammatory IL-6, MCP-1, TNF-α, and IL-1β, and a larger proportion of M2 macrophages (compared to M1), is observed in adipose tissues [143]. In a study performed in male Wistar rats, animals were raised for 12 weeks on HFDs (35% kcal from fat) with different SFA/MUFA/PUFA ratios [144]. Compared to a diet containing a higher proportion of SFA, increasing the proportion of MUFA improves insulin sensitivity and induces expression of the anti-inflammatory cytokine adiponectin, especially in subcutaneous adipose tissue. However, MUFAs are less effective than PUFAs in inducing the expression of the adiponectin gene. Higher MUFA or PUFA proportions are also correlated with lower circulating LDL-cholesterol levels [145]. The effects of fatty acids on inflammation were studied in mice fed for 15 weeks with isocaloric diets rich in either SFA or MUFA [98]. Compared to the SFA group, liver analysis of mice fed with MUFA shows less macrophage infiltration as well as a decrease in TG content and lipid peroxidation (measured via thiobarbituric acid reactive substances). The plasmatic lipid profile is improved, as is insulin sensitivity (as measured by HOMA-IR). The levels of circulating pro-inflammatory CRP and MCP-1 are also decreased [98]. Interestingly, a very recent study has shown that switching mice from an SFA-based HFD to a MUFA-based HFD partially attenuates the progression of hyperglycemia, diminishing pancreatic inflammation and ameliorating β cell function [146].
Macrophage infiltration in the pancreas was lower in MUFA-HFD-fed mice. The authors suggest that this effect is mediated by AMPK [147]. Interestingly, when compared to a diet rich in n-6 PUFA, an olive oil-rich 24-month-long diet protects cardiac mitochondria from age-related damage in rats [148]. A very promising study has recently shown that, compared to SFAs, dietary MUFAs reduce the pro-inflammatory profile in the mouse brain (and in human blood), stimulating M2 macrophage polarization. The authors even propose to use olive oil in nutraceutical strategies to treat diseases associated with a neuro-inflammatory profile [139]. Bone marrow-derived macrophages prepared from HFD-fed mice present a pro-inflammatory profile, including macrophage M1 polarization and elevated secretion of IL-6 and TNF-α (Figure 3) [162]. The treatment of these macrophages with palmitoleate can switch the polarization of macrophages to M2 (Figure 3) [162]. Palmitoleate also activates AMPK, leading to a decrease of NF-κB nuclear translocation (Figure 3). This increases the expression of several anti-inflammatory factors such as MGL2, IL-10, TGFβ1, and MRC1 [162,163]. Incubation of mouse adipose stromal vascular fraction and bone marrow primary cultures with oleate inhibits LPS-induced IL-1β secretion [45,164]. In this situation, AMPK is activated, which in turn inhibits NLRP3 activation (responsible for IL-1β maturation) (Figure 3) [45,164]. Similar observations were reported in primary rat pancreatic islet cells [165]. MUFAs also display protective effects in several other cell lines. For instance, oleate protects mouse muscle C2C12 cells from palmitate-induced insulin resistance and ER stress [166]. In mouse podocyte cells, derived from kidney epithelium, SFAs activate the cell death pathways associated with ER stress. This effect is reversed by oleate [167]. In the human endothelial EAHy926 cell line, palmitoleate decreases pro-inflammatory IL-6, IL-8 and MCP-1 secretion, and downregulates NF-κB (via PPARγ stimulation), as compared to palmitate [168]. Human Correlation Studies Given that SCD1 is the major enzyme involved in MUFA synthesis, several authors have hypothesized that an increase in expression and/or activity of SCD1 could be correlated with an improvement of the patients' inflammatory profile. In a study performed on young adults [169], a clear correlation was observed between the rs2060792 (A/G) single nucleotide polymorphism (SNP) upstream of the SCD1 gene and the levels of the circulating SFAs palmitate and stearate. European women bearing the major allele present with higher palmitate and lower stearate concentrations. Interestingly, this SNP was positively associated with obesity and a higher level of the circulating pro-inflammatory factor CRP, especially in women. In a study analysing surgical samples from human visceral adipose tissue of obese individuals, an enrichment of histone methylation (H3K4me3) in the SCD1 and IL-6 promoters was correlated with increased BMI. This histone methylation enrichment pattern was associated with lower SCD1 expression and higher pro-inflammatory TNF-α and IL-6 expression [170]. However, in overweight adults, high palmitoleate plasma concentrations, reflecting high SCD1 activity, are correlated with the occurrence of inflammatory fatty liver disease [171]. This increased SCD1 activity could be due to a compensatory mechanism triggered by high circulating concentrations of its substrate palmitate [20,172].
The results obtained in these human studies have not always shown a strict correlation between SCD1 activity and inflammation. This suggests that the level of endogenous synthesis is not the only factor behind the modulation of the inflammatory state by MUFA.
Figure 3. MUFAs (monounsaturated fatty acids) can inhibit NF-κB and NLRP3 activation, respectively, through direct binding to GPR120 (G-protein coupled receptor 120) or PPARs (Peroxisome proliferator-activated receptors), and through AMPK (AMP-activated protein kinase) phosphorylation. By inhibiting macrophage M1 polarization, MUFAs potentiate M2 polarization. This figure was generated with Servier Medical ART.
Animal Genetic Models Both human and animal dietary studies clearly argue for a beneficial effect of MUFA on inflammatory status. Given that MUFAs are a product of SCD1 activity, the deletion of this enzyme should reduce the availability of MUFA (and increase SFA accumulation), leading to increased inflammation. SCD1-deficient mice are a useful tool to study the effect of endogenous MUFA synthesis on lipid metabolism and inflammation processes. The asebia mouse model is deficient for SCD1 due to a naturally occurring genomic deletion. As in the SCD1 knockout mice, asebia animals display eye inflammation, a lack of sebaceous glands, and an absence of hair within scarred dermis [173,174]. In skin-specific SCD1 knockout mice, expression of the pro-inflammatory genes IL-6, TNF-α and IL-1β is increased around hair follicles [175,176]. By inducing follicle cell death, this inflammation contributes to hair loss [177]. Like SCD1 knockout mice, asebia mice are protected from HFD-induced obesity, hepatic steatosis and glucose intolerance [178][179][180]. However, compared to wildtype mice, they exhibit a complex inflammatory profile including circulating pro-inflammatory markers such as IL-6 and IL-1β [181].
Adipose tissue-specific SCD1 knockout mice are protected against Western diet-induced obesity and fatty liver disease [74]. Their WAT displays a lower concentration of MCP-1 and TNF-α compared to WAT from wildtype mice, even when they are raised on HFD (60% kcal fat, mainly lard). Enterocyte-specific SCD1 knockout mice display an increase in the pro-inflammatory markers IL-6 and TLR4 within their colon and ileum [182]. Interestingly, these enterocyte-specific effects can be rescued by an oleate-rich diet [183]. Intriguingly, enterocyte-specific SCD1 knockout mice show diminished expression of the TLR4 receptor in the jejunum, suggesting a protection against inflammation [182]. Liver-specific SCD1 knockout mice display an increase in the pro-inflammatory markers IL-1β and TNF-α within their liver [184]. These knockout mouse models exhibit a reduction in the expression of the lipogenic markers ACC, FAS and SREBP-1c. This potential for diminished palmitate synthesis could attenuate the inflammatory effects of SCD1 depletion. Cellular Models Several studies address the specific role of SCD1 in cellular models of inflammation. Silencing or inactivation of the SCD1 gene in the murine preadipocyte 3T3-L1 cell line exacerbates the effects of SFAs, increasing the expression of the pro-inflammatory markers TGF-β, IL-6 and MCP-1, and decreasing the anti-inflammatory IL-10 [185,186]. Similar results are observed in the EndoC-βH1 human pancreatic β cell line. Silencing SCD1 aggravates the lipotoxic effect of palmitate on inflammatory marker expression and, interestingly, oleate and palmitoleate treatments rescue these effects [187]. Incubating RAW 264.7 macrophages with conditioned medium obtained from primary adipocytes isolated from global SCD1 knockout mice decreases expression of both the TNF-α and IL-1β pro-inflammatory cytokines [188]. SCD1 silencing in mouse primary macrophages renders the TLR4 receptor hypersensitive, which exacerbates the gene expression of pro-inflammatory cytokines (IL-1β, MCP-1 and IL-6) [189]. TLR4 hypersensitivity is thought to stem from increased SFA proportions within membrane phospholipids [189]. Other technical approaches allow for insight into the effect of SCD1 overexpression. In primary human myotube cells, overexpression of SCD1 prevents palmitate-induced ER stress and IL-8 gene expression [190]. Mesenchymal stromal cells (MSCs) can be prepared from posterior iliac crest bone marrow extracted from patients [191]. When these MSCs are treated with T0901317 (an LXR agonist), SCD1 and LXRα expression are increased. This treatment reduces palmitate-induced Caspase 3/7 activation and expression of pro-inflammatory IL-6 and IL-8. When MSCs are incubated with the specific SCD1 inhibitor CAY 10566, the effect of the LXR agonist is abrogated. This suggests that, at least in bone marrow stromal cells from these patients, SCD1 is involved in the prevention of inflammation and apoptosis induced by palmitate [191]. More recently, a study has been performed using primary hepatic cells isolated from G-protein coupled receptor 120 (GPR120)-deficient mice. This receptor interacts with MUFAs, especially palmitoleate [192]. The activation of GPR120 by palmitoleate is involved in the resolution of palmitate-induced inflammation through a reduction of NF-κB activity. Interestingly, in these cells, a correlation between SCD1 expression and GPR120 activity is observed [193]. Inhibiting SCD1 in cells leads to increased inflammation.
This is probably due to a combination of lower intracellular MUFA concentration and, undoubtedly, higher intracellular SFA concentration. Conclusions As presented throughout this text, dietary fat intake has an undeniable impact on inflammation. There is evidence that chronic low-grade inflammation can be prevented by lifestyle interventions. The SFA-rich Western diet can induce chronic inflammation and increase the risk of developing obesity-related metabolic disorders such as cardiovascular diseases, type 2 diabetes, and hepatic steatosis. In contrast, a Mediterranean diet especially rich in oleate favours an anti-inflammatory state and is associated with a decreased risk of metabolic syndrome development. Indeed, both human and animal diet studies have shown that substitution of SFA by MUFA activates beneficial anti-inflammatory mechanisms (M2 macrophage polarization, adipocyte IL-10 secretion, inhibition of the NLRP3 inflammasome) and reverses the deleterious effects of SFAs on adipose tissues, hepatic tissue and β cells. Many mechanisms presented here can account for the protective effects of dietary oleate and high levels of circulating MUFAs. The addition of MUFA to diets can therefore be a potential nutraceutical avenue to decrease chronic inflammation and, subsequently, to ameliorate the general metabolic profile. In accordance with the beneficial effects of dietary MUFAs, some studies have shown that inhibiting SCD1 aggravated the deleterious effects of SFAs. This is probably due to an increase of SFA levels (SCD1 substrates). Thus, SCD1 is an interesting therapeutic target to decrease intracellular SFA concentration in favour of MUFA. However, other studies have shown that SCD1 inhibition can have favourable outcomes. SCD1 deletion protects mice against the deleterious effects of SFA-rich HFD and even improves the metabolic profile of humans and animals. In this case, the protective effects of SCD1 deletion cannot be attributed to MUFA activity in the organism. In fact, we and others have shown that SCD1 deletion inhibits lipogenesis [74,76,77,79,182]. This can be attributed to inhibition of SREBP-1c oleylation, decreasing its transcriptional activity [77]. This aspect of SCD1 activity deserves to be further investigated to better understand its specific role in inflammation.
Calcium influx into neurons can solely account for cell contact-dependent neurite outgrowth stimulated by transfected L1.
We have used monolayers of control 3T3 cells and 3T3 cells expressing transfected human L1 as a culture substrate for rat PC12 cells and rat cerebellar neurons. PC12 cells and cerebellar neurons extended longer neurites on human L1-expressing cells. Neurons isolated from the cerebellum at postnatal day 9 responded equally as well as those isolated at postnatal day 1-4, and this contrasts with the failure of these older neurons to respond to the transfected human neural cell adhesion molecule (NCAM). Human L1-dependent neurite outgrowth could be blocked by antibodies that bound to rat L1 and, additionally, the response could be fully inhibited by pertussis toxin and substantially inhibited by antagonists of L- and N-type calcium channels. Calcium influx into neurons induced by K+ depolarization fully mimics the L1 response. Furthermore, we show that L1- and K+-dependent neurite outgrowth can be specifically inhibited by a reduction in extracellular calcium to 0.25 microM, and by pretreatment of cerebellar neurons with the intracellular calcium chelator BAPTA/AM. In contrast, the response was not inhibited by heparin or by removal of polysialic acid from neuronal NCAM, both of which substantially inhibit NCAM-dependent neurite outgrowth. These data demonstrate that whereas NCAM and L1 promote neurite outgrowth via activation of a common CAM-specific second messenger pathway in neurons, neuronal responsiveness to NCAM and L1 is not coordinately regulated via posttranslational processing of NCAM. The fact that NCAM- and L1-dependent neurite outgrowth, but not adhesion, are calcium dependent provides further evidence that adhesion per se does not directly contribute to neurite outgrowth.
Axonal and dendritic growth and arborization are central to the development and regeneration of the nervous system. Both processes are likely to depend upon the functional interplay between a vast array of environmental cues provided by components of the extracellular matrix, as well as by molecules present on the surface of, and secreted by, cells with which the neuronal growth cone comes into contact (reviewed in Bixby and Harris, 1991; Lumsden and Cohen, 1991). Over the last decade a very large number of extracellular matrix and integral membrane glycoproteins that mediate contact-dependent axonal growth have been identified. Prominent among the neuronal growth cone receptor systems that recognize and transduce positive growth signals are members of three gene families, namely the integrins (Reichart and Tomaselli, 1991; Hynes, 1992), the Ig gene superfamily (Williams, 1987; Walsh and Doherty, 1991; Rathjen and Jessell, 1991), and the cadherins (Takeichi, 1991). Evidence from antibody perturbation experiments has shown that when neurons extend neurites over complex cellular substrata (e.g., astrocytes, myoblasts, or Schwann cells), a cocktail of antibodies that block the function of β1-integrins, the neural cell adhesion molecule (NCAM) and L1 cell adhesion molecules (CAMs) (both Ig superfamily members), and N-cadherin are often required for a maximal inhibition of neurite outgrowth (e.g., see Bixby et al., 1987, 1988). These studies suggest that contact-dependent growth of axons requires the integration of signals arising from a number of receptor-ligand interactions. N-cadherin and L1 in neurons can promote neurite outgrowth following their homophilic binding to products of the same gene purified and coated to a tissue culture substratum (Lemmon et al., 1989; Bixby and Zhang, 1990). L1 in neurons can also promote neurite outgrowth following heterophilic binding to a distinct but related gene product called Axonin-1 (Kuhn et al., 1991). Similarly, when N-cadherin- and NCAM-deficient cells are transfected with cDNAs encoding these molecules, expression of the transgene can be correlated with an increase in the ability of the transfected cell to promote neurite outgrowth from a wide variety of neuronal cell types (Matsunaga et al., 1988; Doherty et al., 1990a, 1991a). In the case of NCAM, neurite outgrowth was shown to be dependent on NCAM in both the neuron and substratum, supporting a homophilic binding mechanism (Doherty et al., 1990b). The use of a transfection-based strategy to study CAMs offers a number of advantages over more conventional methods of biochemical purification and coating to a substratum. For example, in the latter case the coated molecule is often required to support both adhesion and consequently neurite outgrowth, and it remains unclear how these distinct functions are related (e.g., see Doherty et al., 1992a). In transfection models, the control substratum (i.e., untransfected cells) can be selected for its ability to support adhesion per se, allowing the neurite outgrowth-promoting activity of the transfected CAM to be studied on its own. More importantly, facets of function relating to the lateral diffusion of CAMs in membranes and/or their ability to interact with cytoskeletal elements in a cellular substratum are obviously lost when the CAM is studied as a purified molecule.
In this context, recent results have shown that NCAM isoforms that differ only in the size of their cytoplasmic domain (as a consequence of natural alternative splicing of the NCAM gene) differ considerably in their ability to promote neurite outgrowth, and that this most likely relates to NCAM's lateral diffusion properties in the cellular substratum (Doherty et al., 1992b). The cDNAs encoding mouse (Moos et al., 1988), rat (Miura et al., 1991), and human L1 (Hlavin and Lemmon, 1991; Reid and Hemperly, 1992) have all been isolated and characterized. These cDNAs all encode proteins of ~1,260 amino acids that share ~85% identity between human and mouse. An alternatively spliced exon that encodes a four-amino acid peptide in the cytoplasmic domain was also identified in rat and human cDNAs. In the present study we have transfected mouse NIH-3T3 fibroblasts with a plasmid vector containing the full coding sequence of human L1. Stable clones expressing human L1 have been isolated and characterized for their ability to promote neurite outgrowth from rat PC12 pheochromocytoma cells (see Greene and Tischler, 1976; Doherty et al., 1991b) and rat cerebellar granule cells. Previous studies have suggested that a cis-interaction between L1 and NCAM in the same membrane may result in the formation of a potent receptor complex that can then interact better than L1 on its own for trans binding to L1 on a second membrane (Kadmon et al., 1990a,b). A similar functional interplay between NCAM and L1, which is primarily controlled by long chains of α2-8-linked polysialic acid (PSA) on NCAM, has been suggested to be important for establishment of the correct innervation pattern in the chick hindlimb (Landmesser et al., 1990). In the present study we address three important questions relating to L1 and NCAM function in neurons. Firstly, does L1 induce neurite outgrowth via activation of the same neuronal second messenger pathway as NCAM, and does this depend on the flux of extracellular calcium into neurons? Secondly, do neurons undergo a similar age-dependent loss of responsiveness to L1 as they do for NCAM-dependent neurite outgrowth? Finally, is NCAM function required for L1-dependent neurite outgrowth and is the latter directly modulated by the presence of PSA on neuronal NCAM? Our results clearly show that L1 and NCAM can promote neurite outgrowth via activation of a common neuronal CAM-specific second messenger pathway, and that direct activation of the pathway is sufficient to fully mimic the response. In contrast, factors that operate to modulate NCAM-dependent neurite outgrowth, such as alternative splicing and reduced expression of PSA on neuronal NCAM, do not directly impinge on L1's ability to promote neurite outgrowth. Furthermore, we provide novel data to support the postulate that CAM-dependent activation of second messengers is solely responsible for the neurite outgrowth response. Plasmid Construction Full-length human L1 cDNA (Reid and Hemperly, 1991) was subcloned from pBluescript into the expression vectors pJ4fl (Morgenstern and Land, 1990) and pCDNA1 (Invitrogen) under the control of the Mo MuLV LTR and CMV promoters, respectively. The L1 cDNA was removed from pBluescript using ClaI and XbaI (the latter site was end-repaired using the Klenow fragment of DNA polymerase I) for ligation into ClaI- and SmaI-cut pJ4fl, or using NotI and XhoI for ligation into pCDNA1 cut with XmaIII and XhoI. The integrity of the inserted L1 cDNA was checked by partial sequence and restriction analyses.
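Restriction analysis of this kind can also be sanity-checked in silico before cutting. The sketch below is purely illustrative: the insert sequence is a made-up placeholder rather than the human L1 cDNA, and only the standard recognition sequences of the enzymes named above are assumed. It simply counts how many times each enzyme would cut a candidate insert, the sort of check used to confirm that the flanking enzymes cut once and that enzymes used elsewhere in the cloning do not cut internally.

```python
# Illustrative sketch only: checks a cloning plan of the kind described above.
# The insert sequence below is a made-up placeholder, not the human L1 cDNA;
# the recognition sites are the standard ones for these enzymes.
SITES = {
    "ClaI": "ATCGAT",
    "XbaI": "TCTAGA",
    "SmaI": "CCCGGG",
    "NotI": "GCGGCCGC",
    "XhoI": "CTCGAG",
}

def find_sites(seq, enzymes):
    """Return 0-based positions of each enzyme's recognition site in seq."""
    seq = seq.upper()
    hits = {}
    for name in enzymes:
        site, pos, found = SITES[name], 0, []
        while (pos := seq.find(site, pos)) != -1:
            found.append(pos)
            pos += 1
        hits[name] = found
    return hits

insert = "ATCGAT" + "GCTAGCTAGGATCCAAGCTT" * 3 + "TCTAGA"  # placeholder cDNA fragment

# Enzymes flanking the insert should each cut once; enzymes used only in the
# vector polylinker should not cut inside the insert.
for enzyme, positions in find_sites(insert, ["ClaI", "XbaI", "SmaI", "NotI", "XhoI"]).items():
    print(f"{enzyme}: {len(positions)} site(s) at {positions}")
```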
Transfection

Cotransfection of human L1 with the selectable plasmid pHβAP-1-neo (Doherty et al., 1991b) at a ratio of 20:1 was performed using the calcium phosphate transfection protocol provided with the CellPhect Transfection Kit (Pharmacia Fine Chemicals, Piscataway, NJ). 3T3 cells were grown for 24 h to a density of 1 × 10^4 cells per 60-mm petri dish before addition of the calcium phosphate-treated DNAs. Cells were cultured for 16 h at 37°C in complete media before transfer to 100/150-mm petri dishes containing DME, 10% FCS, 2 mM glutamate, and 0.5 mg/ml G418. After 10-14 d in culture, G418-resistant colonies were isolated and characterized for L1 expression.

Characterization of Transfected Cells

Control and G418-resistant clones were characterized for expression of L1 by immunocytochemistry and Western blotting using the 5G3 monoclonal antibody (Mujoo et al., 1986; Wolff et al., 1988) and the Neuro4 mAb. For the generation of the latter antibody, Balb/c mice were immunized with an adult human brain glycoprotein fraction. After fusion with P3X63Ag8.653 cells and selection in HAT, the Neuro4 antibody was selected by immunoblotting of 200/190- and 140-kD bands in crude membrane fractions. Cultures were processed for immunocytochemistry by sequential incubation with 5G3 or Neuro4 (both at 1:500 dilution of ascites), biotinylated anti-mouse Ig, and Texas red streptavidin (Amersham International, Amersham, UK) (both diluted 1:500), as previously described (Doherty et al., 1991a). Western blotting of whole cell extracts of control and transfected 3T3 cells and PC12 cells was carried out essentially as previously described, using the primary antibodies at a 1:200 dilution and the ECL Western blotting reagents from Amersham International (Moore et al., 1987; Doherty et al., 1991a). The relative level of human L1 on the various clones of transfected cells was determined by measuring the binding of a saturating concentration of Neuro4 by standard enzyme-linked immunosorbent assay (Doherty et al., 1990a). Results obtained with 5G3 were no different from those obtained with Neuro4, and examples of the latter only are shown throughout. PC12 cells co-cultured on monolayers of control and transfected 3T3 cells (see below) were also immunostained with a purified Ig fraction of a rabbit antiserum raised against mouse L1 (Rathjen and Schachner, 1984), using biotinylated anti-rabbit Ig and Texas red streptavidin as above.

Cell Culture and Neurite Outgrowth

The neurite-outgrowth promoting activity of transfected human L1 was determined as previously described for transfected NCAM and N-cadherin (Doherty et al., 1991a, 1992a). In brief, rat cerebellar neurons isolated at PND 1-9, or naive and primed PC12 cells (see Greene, 1984), were cultured for 16-24 h on confluent monolayers of parental 3T3 cells or clones of 3T3 cells expressing human L1. Co-cultures were established by seeding ~1,000 PC12 cells or ~2,000 cerebellar neurons onto 3T3 cell monolayers established in individual chambers of eight-chamber Lab-Tek slides. The co-culture media was SATO supplemented with 2% FCS (Doherty et al., 1992a). In some experiments the levels of calcium and magnesium were changed by direct supplementation of calcium/magnesium-free DME as indicated (see text). The average length of the longest neurite on PC12 cells and cerebellar neurons was determined using a Sight Systems Image Manager (Sight Systems, Newbury, England) as previously described (Doherty et al., 1991a).
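The neurite measurements described above reduce to simple summary statistics: the mean length of the longest neurite per cell, the error on that mean, and the fraction of cells whose longest neurite exceeds a given threshold. The following is a minimal sketch of that reduction; the array of per-cell lengths, the 20 µm threshold, and the synthetic numbers are illustrative assumptions, not data from the study.

```python
import numpy as np

def summarize_neurites(lengths_um, threshold_um=20.0):
    """Summarize longest-neurite lengths measured for one co-culture condition.

    lengths_um   : per-cell length of the longest neurite (micrometres).
    threshold_um : length used to count "responding" cells.
    Returns the mean, the standard error of the mean, and the fraction of
    cells whose longest neurite exceeds the threshold.
    """
    lengths = np.asarray(lengths_um, dtype=float)
    mean = lengths.mean()
    sem = lengths.std(ddof=1) / np.sqrt(lengths.size)
    frac_above = np.mean(lengths > threshold_um)
    return mean, sem, frac_above

# Illustrative use with made-up measurements for ~120 PC12 cells per condition.
rng = np.random.default_rng(0)
control = rng.gamma(shape=4.0, scale=4.5, size=120)   # hypothetical control 3T3 monolayer
l1_clone = rng.gamma(shape=4.0, scale=9.0, size=120)  # hypothetical L1-expressing monolayer
for label, data in [("control 3T3", control), ("L1 3T3", l1_clone)]:
    m, s, f = summarize_neurites(data)
    print(f"{label}: {m:.1f} +/- {s:.1f} um, {100*f:.0f}% of neurites > 20 um")
```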
Other Reagents

Pertussis toxin and K-252b were gifts from Dr. J. Kenimer and Dr. Y. Matsuda. Heparin, diltiazem, and verapamil were from Sigma Chemical Co. (St. Louis, MO). Nifedipine was from Life Technologies Ltd. (Grand Island, NY); ω-conotoxin MVRA was from Peninsula Laboratories (Liverpool, UK). Endo-N was a kind gift from Dr. J. Roth, and the monovalent Fab fraction of anti-mouse L1 was generously donated by Dr. Fritz Rathjen. All of the reagents were used as previously described (see Doherty et al., 1990b, 1991a) at concentrations established to block their respective targets and also shown to have no nonspecific effects on neurite outgrowth. BAPTA/AM was obtained from Calbiochem Novabiochem (UK) Ltd. (Nottingham, UK). There was no difference in neuronal cell numbers on control and transfected monolayers in the presence and absence of any of these agents (our unpublished observations, but see Doherty et al., 1991).

Expression and Characterization of Human L1 in 3T3 Cells

NIH-3T3 cells were transfected with one of two distinct plasmids containing the full coding sequence of human L1, and clones were selected that were resistant to G418 (0.5 mg/ml). These clones were initially characterized for cell surface expression of human L1 using the well-characterized 5G3 mAb (Mujoo et al., 1986), which reacts specifically with human L1 (Wolff et al., 1988), and the Neuro4 mAb, which also reacts specifically with human L1 (J. Hemperly, unpublished observations; this study). Parental 3T3 cells showed weak to negative intracellular staining with both antibodies (not shown, but see Fig. 1 B), whereas a number of clones of transfected cells showed bright positive staining over the entire cell surface, again with both antibodies (e.g., see Fig. 1 A). Specific binding of antibodies to transfected cells was confirmed by quantitative enzyme-linked immunosorbent assay, and a number of clones that expressed similar levels of human L1 were thus identified and expanded for further study (data not shown). The presence of the antigen on the cell surface was confirmed by the ability of both antibodies to stain live cells (data not shown). There were no obvious differences between cells transfected with the two plasmids. Human L1 was further characterized by immunoblotting. Both mAbs recognized a doublet band at ~150-160 kD in extracts of transfected 3T3 cells but failed to show any specific binding to parental 3T3 cells (Fig. 1 C). This is unlikely to relate to species-specific activity of the antibodies, as the Neuro4 mAb bound to the previously reported bands at 190/200 and 140 kD in rat PC12 cells and cerebellar neurons (not shown). Furthermore, a Fab fraction of a rabbit antiserum raised against mouse L1 (Rathjen and Schachner, 1984) showed very strong staining of rat PC12 cells with only low-level background staining of 3T3 cells in co-culture (Fig. 1 B). This antibody did not, however, recognize human L1 in the above transfectants. This result was confirmed by quantitative enzyme-linked immunosorbent assay; in three independent experiments there was no significant difference in the binding of this antibody to control and human L1-expressing 3T3 cells. These data show that parental 3T3 cells express negligible levels of endogenous L1, and that human L1 expressed in transfected cells exists as a doublet of 150-160 kD. For comparison, human neuroblastoma cells express L1 as a diffuse component ranging from 200 to 215 kD with additional bands at ~150 kD.
Removal of N-linked carbohydrates from the larger human L1 bands shifts the molecular mass to a 150-165-kD doublet (Wolff et al., 1988). Thus, L1 expressed in 3T3 cells runs at a similar molecular mass to L1 in human neuroblastoma cells but fails to show the same degree of heterogeneity, probably due to a more limited pattern of posttranslational processing. Similar results have been found with 3T3 cells transfected with NCAM.

Neurite Outgrowth on Monolayers of Control 3T3 Cells and 3T3 Cells Expressing Human L1

In our initial experiments, we cultured naive PC12 cells for 20-48 h on confluent monolayers of control 3T3 cells and 3T3 cells expressing human L1. In a typical experiment there was no obvious morphological response at 20 h, but a significant enhancement of neurite outgrowth was clearly apparent by 48 h. For example, in one experiment the mean length of the longest PC12 cell neurite was 35.6 ± 2 µm on L1 transfectants as compared to 18.0 ± 1 µm on parental 3T3 cells (P < 0.005, each value the mean ± SEM of ~120 PC12 cells), with the percentage of these neurites >20 µm in length increasing from 40 to 79%. Thus, L1 appears to stimulate neurite outgrowth to a similar extent as transfected NCAM and N-cadherin (Doherty et al., 1991a), and there was no obvious difference in the morphology of PC12 cells on monolayers expressing these individual CAMs (data not shown). To try to obtain a more rapid response from PC12 cells, we initially cultured them for 3-6 d in NGF (~50 ng/ml) before culturing them on monolayers of control and transfected 3T3 cells (in the presence of NGF antibodies to neutralize any residual NGF). Fig. 2 shows the mean length of the longest PC12 cell neurite after 16 h of culture on monolayers of control 3T3 cells as compared to three individual clones of transfected 3T3 cells that express similar levels of L1. In each case the length of the longest neurite was significantly greater on L1-expressing cells (P < 0.005). Primed PC12 cells also showed a more rapid response to transfected NCAM and N-cadherin (see below), and this phenomenon may relate at least in part to NGF-induced increases in L1 (also known as NILE), NCAM, and N-cadherin in PC12 cells (McGuire et al., 1978; Mann et al., 1989; Doherty et al., 1991a). As the three L1-expressing clones (which vary in their level of expression by <15%; data not shown) promote neurite outgrowth to a similar extent, we have focused our attention on clone 1. Also, control experiments with PC12 cells showed that the most substantial benefit of pretreatment with NGF was obtained over a 3-4 day period, and this was therefore used in all subsequent experiments. The overall effect of human L1 on neurite outgrowth from primed PC12 cells, determined in five independent consecutive experiments, is shown in Fig. 3. There was a highly significant 92% increase in the mean length of the longest neurite, and the percentage of cells with a neurite >40 µm increased by a factor of 2.6, from 24 to 63%. In parallel experiments, transfected NCAM and N-cadherin increased the length of the longest neurite by 102 ± 12 (3)% and 84 ± 21 (3)%, respectively (both values mean ± SEM for the given number of independent experiments).

[Figure 1. L1 immunoreactivity in transfected 3T3 cells and PC12 cells. (a) A culture of transfected 3T3 cells was fixed with 4% paraformaldehyde and stained with the Neuro4 mAb (1:500 dilution). Positive staining was found over the entire surface of the cells.
(b) A co-culture of naive PC12 cells on a confluent monolayer of control 3T3 cells was fixed with paraformaldehyde and stained with rabbit antibodies raised against mouse L1. Note the bright positive staining on the PC12 cells and the failure of the antibody to bind to control 3T3 cells. (c) The Neuro4 mAb recognized major bands at ~150-160 kD in Western blots of SDS extracts of transfected 3T3 cells (lane 1), but failed to bind to any bands in untransfected 3T3 cells (lane 2). Bars, 50 µm.]

Thus, over a ~16-h period of co-culture all three CAMs promote neurite outgrowth from primed PC12 cells to a similar extent. Transfected NCAM and N-cadherin can also promote neurite outgrowth from a variety of primary neurons, including rat cerebellar neurons (e.g., see Doherty et al., 1992a). In the present study cerebellar neurons isolated at PND 1, 2, 3, 4, and 9 were cultured for ~24 h on confluent monolayers of control and human L1-expressing 3T3 cells before being fixed, and the average length of the longest GAP-43-positive neurite was determined for each cell. Expression of human L1 was associated, in each of five independent experiments, with a significant (P < 0.005) neurite outgrowth promoting response. There was no evidence for a differential response between PND1 and PND9, and the pooled results from the five experiments are shown in Fig. 3 alongside those for primed PC12 cells. At PND9 the response to L1 (an increase in mean length from 32.1 ± 2.0 µm to 72.0 ± 4.7 µm, and in the percentage of cells with a neurite longer than 40 µm from 24.6 to 75.6%) was slightly greater than the average response. The same neurons rapidly lose their ability to respond to transfected NCAM over the PND6-PND8 period (Doherty et al., 1992a,b). Therefore, neuronal responsiveness to NCAM and L1, in terms of neurite outgrowth, is not co-ordinately regulated.

Antibodies to Neuronal L1 Block Human L1-dependent Neurite Outgrowth

To show unequivocally that the increased neurite outgrowth on L1-transfected cells was indeed dependent on L1 function, a Fab fraction of an anti-mouse L1 rabbit antiserum was added to cultures of both primed PC12 cells and rat cerebellar neurons co-cultured on control and human L1-expressing 3T3 cells. This antibody bound avidly to rat L1 (see Fig. 1 B) but did not show any significant binding to control 3T3 cells (Fig. 1 B) or to 3T3 cells expressing human L1 (see above). The results of a typical experiment are shown in Fig. 4. This antibody completely inhibited the human L1-associated response from both PC12 cells and rat cerebellar neurons. In a total of three independent experiments (two with PC12 cells, one with cerebellar neurons) the L1 response was inhibited by 92.3 ± 9.3% (mean ± SEM). The specificity of the antibody reagent has been established by showing that it does not inhibit neurite outgrowth over control 3T3 monolayers, nor does it inhibit NCAM- or N-cadherin-dependent neurite outgrowth (Doherty et al., 1991a). As the antibody bound exclusively to L1 in the neuron, these data provide substantive evidence that a homophilic binding of rat L1 to human L1 underlies the above response (see also Lemmon et al., 1989).

[Figure 4. Antibodies to neuronal L1 block human L1-dependent neurite outgrowth. Primed PC12 cells were cultured on monolayers of control and human L1-expressing 3T3 cells in control media or media supplemented with a monovalent Fab fraction of a rabbit antiserum to mouse L1 (at 250 µg/ml).
This antibody bound to the PC12 cells but not to the monolayers (see text). After ~16 h the cultures were fixed and the length of the longest neurite on each PC12 cell was determined. Each value is the mean ± 1 SEM for 120-150 PC12 cells sampled in replicate cultures.]

Pertussis Toxin Blocks the L1 Response

Pertussis toxin ribosylates the α subunit of heterotrimeric G proteins of the Gi/Go families and thereby inhibits their function. We have previously shown that pertussis toxin can block NCAM- and N-cadherin-dependent neurite outgrowth from PC12 cells and that pretreatment of PC12 cells is sufficient for maximal inhibition (Doherty et al., 1991a). In the present study, pertussis toxin was added to PC12 cells and cerebellar neurons cultured on control and human L1-expressing 3T3 cells, with the results from the latter shown in Fig. 5. Pertussis toxin completely abolished the response to L1 without affecting basal (presumably integrin-dependent) neurite outgrowth over control 3T3 cells. Results pooled from a total of four independent experiments (three with PC12 cells, one with cerebellar neurons) showed pertussis toxin to block the L1 response by 91.5 ± 8.1%. In the presence of pertussis toxin, neurite outgrowth on control 3T3 monolayers was 102.7 ± 4.0% of that found in the absence of toxin (both values mean ± SEM). The target for pertussis toxin was a neuronal rather than a 3T3 cell G-protein, as demonstrated by the fact that pretreatment of neurons, but not pretreatment of monolayers, was sufficient for maximal inhibition of the response (data not shown).

[Figure 6. The effect of calcium channel antagonists on the L1 response. Primed PC12 cells and cerebellar neurons were cultured on monolayers of control 3T3 cells or 3T3 cells expressing human L1 in the presence and absence of antagonists of N-type calcium channels (ω-conotoxin at 0.25 µM), L-type calcium channels (diltiazem, verapamil, or nifedipine, all at 10 µM), or a combination of both (for details, see text). The percentage increase in mean neurite length on L1-expressing 3T3 cells as compared to control 3T3 cells was determined in each instance, and the results show the ability of the calcium channel antagonists to inhibit this response. Each value is the mean ± 1 SEM for the given number of independent experiments. None of these agents significantly affected neurite outgrowth over control 3T3 cell monolayers (Doherty et al., 1991b). For example, in five experiments a combination of ω-conotoxin and an L-type channel antagonist reduced growth on parental 3T3 cells by 6.6 ± 5.1% (mean ± SEM).]

L- and N-type Calcium Channel Antagonists Inhibit L1-dependent Neurite Outgrowth

Verapamil, diltiazem, and nifedipine specifically block L-type calcium channels in cells, whereas ω-conotoxin blocks N-type calcium channels (e.g., see Discussion). In the present study these reagents were tested for their ability to block the L1 response in both PC12 cells and cerebellar neurons. There was no significant difference in the results obtained with each of the individual L-type channel antagonists (each tested twice) and no major difference in the results obtained with PC12 cells and cerebellar neurons. The results have therefore been pooled and are summarized in Fig. 6. From these data it can be seen that blocking L- or N-type calcium channels on their own was sufficient to inhibit the L1 response by 91 ± 5.2% (n = 6) and 80.8 ± 8.0% (n = 5), respectively. When both were blocked, the response was inhibited by 99.4 ± 3.8% (n = 5).
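Throughout these results, the effect of a CAM is expressed as the percentage increase in mean neurite length on the transfected monolayer relative to the control monolayer, and the effect of a perturbing agent as the percentage of that increase which is abolished. A small sketch of this arithmetic follows; the numerical inputs are illustrative only and are not measurements from the study.

```python
def percent_stimulation(mean_cam, mean_control):
    """Percentage increase in mean neurite length attributable to the CAM."""
    return 100.0 * (mean_cam - mean_control) / mean_control

def percent_inhibition(mean_cam, mean_control, mean_cam_drug, mean_control_drug):
    """Fraction of the CAM-dependent increase abolished by a perturbing agent,
    computed after subtracting the (drug-treated) control baseline."""
    response = mean_cam - mean_control
    response_drug = mean_cam_drug - mean_control_drug
    return 100.0 * (1.0 - response_drug / response)

# Illustrative numbers in micrometres (assumed, for demonstration only):
print(percent_stimulation(35.6, 18.0))             # ~98% stimulation by the CAM
print(percent_inhibition(35.6, 18.0, 19.5, 18.2))  # ~93% inhibition by the blocker
```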
Again, control experiments confirmed previous results by demonstrating that these agents do not modulate neurite outgrowth over parental 3T3 cells (see Fig. 6 legend).

Reducing Extracellular Calcium or Preloading Neurons with a Calcium Chelator Inhibits L1-dependent Neurite Outgrowth

The experiments with calcium channel antagonists suggest that an influx of extracellular calcium into neurons is required for CAM-dependent neurite outgrowth. To test this more directly, cerebellar neurons were cultured on monolayers of control and human L1-expressing 3T3 cells in the presence of varying levels of extracellular calcium. Fig. 7 A shows that basal neurite outgrowth (and hence cell viability) is not affected by reducing extracellular calcium to 0.25 mM or increasing it to 8 mM. In contrast, L1-dependent neurite outgrowth was absolutely dependent on the extracellular calcium concentration being >0.25 mM, with the response peaking at 4 mM. Identical results were obtained for NCAM-dependent neurite outgrowth (data not shown). Additional supporting evidence that calcium influx into the neurons underlies the response was obtained by preloading the neurons with BAPTA/AM (e.g., see Koike et al., 1989). This calcium chelator is membrane-permeant and enters the cell, where it is sequestered by hydrolysis of its acetoxymethyl ester group. Once inside the cell it will chelate, and thereby attenuate, changes in intracellular calcium. The results in Fig. 7 B show that BAPTA/AM-pretreated neurons can extend axons as normal on parental 3T3 cells. In contrast, L1-dependent neurite outgrowth was significantly inhibited by pretreatment with 4 µM BAPTA/AM and fully inhibited by 20 µM BAPTA/AM. Koike et al. (1989) have previously shown that pretreatment of sympathetic neurons with 20 µM BAPTA/AM, or withdrawal of extracellular calcium, can also block K+ depolarization-dependent survival over a 48-h culture period.

Calcium Influx into Neurons Fully Mimics CAM-dependent Neurite Outgrowth

Second messenger pathway activation by K+ depolarization has previously been shown to mimic the NCAM and N-cadherin response of PC12 cells (Saffell et al., 1992). In this study, potassium depolarization induced significant neurite outgrowth from cerebellar neurons cultured on monolayers of 3T3 fibroblasts. The effect of potassium (5-100 mM) was dose-dependent, with an optimal response found at a concentration of 40 mM. The response was comparable but not additive to CAM-induced neurite outgrowth, indicating the induction of a common pathway (Fig. 8). The potassium-induced response could be inhibited by reduction of extracellular calcium to 0.25 mM, by N- or L-type calcium channel blockers either on their own (not shown) or in combination, and by treatment with the calcium chelator BAPTA/AM. Pertussis toxin did not inhibit potassium-induced neurite outgrowth from cerebellar neurons (Table I). It has previously been shown that pertussis toxin does not inhibit the potassium response of PC12 cells (Saffell et al., 1992). These data confirm that potassium-induced neurite outgrowth is also dependent on calcium influx through N- and L-type calcium channels.
The failure of a combination of N- and L-type calcium channel inhibitors to fully block the response most likely reflects the fact that potassium depolarization-induced increases in intracellular calcium can be substantially, but not completely, inhibited by these antagonists (Reber et al., 1992).

[Table I. Potassium-induced neurite outgrowth from cerebellar neurons: mean length of the longest neurite per cell (µm), with the number of neurons sampled in parentheses.
    Control                             34.5 ± 3.0 (123)
    K+ at 40 mM                         78.7 ± 5.6 (119)*
    (i)   Low calcium                   37.8 ± 4.0 (123)‡
    (ii)  + Diltiazem and ω-conotoxin   42.8 ± 4.4 (116)‡
    (iii) + BAPTA/AM                    33.9 ± 3.0 (134)‡
    (iv)  + Pertussis toxin             77.5 ± 4.0 (130)§
Cerebellar neurons (PND4) were grown for 24 h on confluent monolayers of 3T3 cells in (a) control media or (b) media supplemented with K+ at 40 mM in the presence of reduced extracellular calcium (0.25 mM), diltiazem (10 µM) and ω-conotoxin (0.25 µM), BAPTA/AM (20 µM), or pertussis toxin (1 µg/ml). None of the treatments (i-iv) had any significant effect on neurite outgrowth on 3T3 monolayers in control media (see Figs. 5-7). The results show the mean length of the longest neurite per cell ± SEM for the given numbers of cerebellar neurons sampled from replicate cultures. * Significantly different from growth in the absence of K+ (P < 0.0005). ‡ Significant inhibition of the K+ response (P < 0.0005). § Nonsignificant difference from growth in the presence of K+ (P < 0.25).]

Agents That Perturb NCAM Function Do Not Directly Modulate L1-dependent Neurite Outgrowth

Heparin binds to the second Ig domain of NCAM and blocks its function by either sterically hindering homophilic binding and/or preventing NCAM interactions with heparan-sulphate-containing proteoglycans (Cole and Glaser, 1986; Cole and Akeson, 1989). Heparin (250 µg/ml) completely blocks NCAM-dependent neurite outgrowth (Doherty et al., 1990a,b). In the present study, in the absence of heparin, L1 increased the length of the longest PC12 cell neurite by 91 ± 8.9%, whereas in its presence neurite length was increased by 75 ± 11% (both values mean ± SEM for measurements made on ~120 PC12 cells). Thus heparin does not block L1 function. The α2-8-linked PSA that is present predominantly, if not exclusively, on NCAM can be specifically removed by endoneuraminidase N (endo N) (Rutishauser et al., 1988; Doherty et al., 1990b). Removal of PSA from neuronal NCAM substantially inhibits NCAM-dependent neurite outgrowth over NCAM-transfected cells. At PND4, cerebellar neurons are particularly sensitive to removal of PSA (Doherty et al., 1992a). In the present study, a maximally active concentration of endo N was added to these neurons growing on parental and L1-expressing 3T3 cells. As previously reported, endo N had no effect on basal neurite outgrowth (38.0 ± 3.0 µm as compared with 36.0 ± 2.7 µm), nor did endo N affect the enhanced growth apparent on L1 transfectants (67.3 ± 4.3 µm as compared to 62.9 ± 4.0 µm; both sets of values are the mean ± SEM for ~150 neurons measured in the presence and absence of endo N, respectively). Thus, the ability of neuronal L1 to bind to L1 in the substratum and transduce the recognition event into a cellular response is not directly modulated by the presence of PSA on neurons.

Discussion

Antibodies to L1 have been reported to inhibit granule cell migration (Lindner et al., 1983) and perturb fiber outgrowth (Fischer et al., 1986) in microexplants of the developing cerebellum.
In addition, antibodies to L1 can induce defasciculation of axon bundles (Rathjen, 1988) and reduce neurite outgrowth along other neurites (Chang et al., 1987) and over the surface of Schwann cells (Seilheimer and Schachner, 1988; Bixby et al., 1988). In addition to NCAM, the expression of L1 and the immunologically related NgCAM is reduced on both axons and Schwann cells after fiber tract formation, but all three molecules are upregulated after injury to the peripheral nervous system (Daniloff et al., 1986; Martini and Schachner, 1988). All of these data suggest that L1 may play an important role in fibre tract formation. In the present study, we have expressed human L1 in mouse NIH-3T3 fibroblasts. These cells have been shown to express negligible amounts of endogenous L1 by immunocytochemistry, immunoblotting, and antibody perturbation. Expression of human L1 was associated with an enhanced ability of the transfected cells to promote neurite outgrowth from naive and primed PC12 cells and from rat cerebellar neurons isolated over the PND1-PND9 period of development. These responses could be fully inhibited by antibodies that specifically bind to and block the function of rat L1. These data provide substantive evidence that human L1 promotes neurite outgrowth by directly binding to neuronal L1 (see also Lemmon et al., 1989). The ability of PC12 cells and primary neurons to respond to transfected NCAM and N-cadherin by increasing neurite outgrowth is dependent upon the activation of a common second messenger pathway in the neurons (Doherty et al., 1991a, 1992a). Activation of this pathway can be inhibited by pertussis toxin, and the main trigger for the response appears to be the opening of both N- and L-type calcium channels. Evidence for this comes both from the above perturbation studies and from more recent studies demonstrating that direct activation of calcium channels can fully mimic the CAM response (Saffell et al., 1992). In the present study, we have provided the first evidence that L1-dependent neurite outgrowth from PC12 cells and primary neurons involves activation of this (or a very similar) pathway. The L1 response could be fully inhibited by pertussis toxin or by a combination of L- and N-type calcium channel antagonists. An unexpected observation was that L- or N-type antagonists could substantially (80-90%) inhibit the L1 response on their own. Similar results have now also been observed in a limited number of experiments for NCAM/N-cadherin-dependent neurite outgrowth from the same neurons. This contrasts with previous studies on naive PC12 cells (Doherty et al., 1991a) and on hippocampal neurons (Doherty et al., 1992c), where an inhibition of NCAM-dependent neurite outgrowth by more than ~60% required the addition of both N- and L-type antagonists. The likeliest explanation of the current data is that a threshold level of calcium is required for the response, and that in some instances flux through both types of calcium channel is required to reach this value (see also Kater and Mills, 1991). Thus, regulation at the level of calcium influx could contribute to the previously reported threshold effect of NCAM on neurite outgrowth (Doherty et al., 1990a) and also to the synergism between cotransfected NCAM and N-cadherin in promoting neurite outgrowth (Doherty et al., 1991b).
A greater than maximal activation of a single pathway would also readily explain the redundancy of individual CAMs apparent in some antibody perturbation studies (see Bixby et al., 1987). Direct evidence for a calcium influx into the neurons mediating L1-dependent neurite outgrowth was obtained by showing that a reduction in extracellular calcium, or pre-loading neurons with a calcium chelator, specifically abolished this response. A very important question is whether the above perturbants block a relatively specific CAM-activated pathway or whether they simply block steps that are common to a variety of pathways that lead to neurite outgrowth. Our own published studies have shown that integrin-dependent neurite outgrowth from PC12 cells and primary neurons is not inhibited by pertussis toxin and calcium channel antagonists. Likewise, NGF-dependent neurite outgrowth from PC12 cells is also not affected (Doherty et al., 1991a, 1992a). More recently, these inhibitors have been shown to have no effect on neurite outgrowth stimulated by agents that operate by increasing the level of intracellular cAMP in PC12 cells (Saffell et al., 1992). Thus, to date, the only molecules that activate this pathway are NCAM, N-cadherin, and L1, suggesting that this is indeed a CAM-specific pathway for neurite outgrowth. That there are undoubtedly convergent steps downstream of calcium channel activation is demonstrated by the ability of K-252b, a general kinase inhibitor, to inhibit all of the above pathways that lead to neurite outgrowth (our own unpublished observations, see also Doherty et al., 1991a). Recent studies on transfected NCAM suggest that lateral diffusion in the substratum may be important for activation of the above pathway in neurons (Doherty et al., 1992b). The fact that at least three CAMs can activate the same pathway raises the possibility of an 'adaptor' molecule that can interact with several CAMs and also with the effector molecule(s). However, in this context it should be noted that various CAMs are directly and/or indirectly associated with each other; for example, antibodies to NCAM can co-cluster L1 (Kadmon et al., 1990b), and L1 and Axonin-1 co-localize to patches on cell somas and neurites (Kuhn et al., 1991). Thus the adaptor molecule could conceivably be one of the above CAMs. In addition, local hot spots of calcium channels have been described in the growth cone membrane, and these are associated with areas of morphological change (Silver et al., 1990). Thus the possibility that CAMs could directly activate calcium channels by co-clustering them should also be considered, although the fact that pertussis toxin can block the response clearly suggests that other molecules are involved. Purified CAMs coated onto an otherwise inert substratum do not appear to promote neurite outgrowth via activation of the above second messenger pathway (P. Sonderegger and J. Bixby, individual personal communications). In these studies CAM-dependent adhesion per se may be sufficiently permissive to allow for neurite outgrowth. Failure to activate the pathway may be directly related to the fact that the CAMs are immobilized on the substratum. Diffusional entrapment of adhesion molecules into transient clusters on one membrane may be dependent on similar events in the apposing membrane, and this has been evoked as a mechanism for activation of second messenger pathways in lymphocytes (Singer, 1992).
We would suggest that similar models, possibly including co-clustering of an adaptor or effector molecule, may account for activation of the CAM-specific second messenger pathway in neurons. In contrast to neurite outgrowth, cell adhesion is associated with the formation of stable adhesion plaques, and this most probably involves linkage of CAMs to the underlying cytoskeleton (e.g., see Nagafuchi and Takeichi, 1988). It has been suggested that PSA on neuronal NCAM can act as a global modulator of CAM function by sterically hindering membrane apposition and thereby modulating trans-binding of a variety of CAMs, and in particular L1 (Rutishauser et al., 1988; Landmesser et al., 1990). NCAM and L1 may be physically associated in the same membrane (Simon et al., 1991), and it is also conceivable that PSA could modulate L1 function via this cis-interaction. In the present study we have shown that removal of PSA from neuronal NCAM has no direct effect on L1 function as a neurite outgrowth promoting molecule. This completes a series of studies in which we have previously shown that this treatment can inhibit NCAM-dependent neurite outgrowth by up to 80% (Doherty et al., 1992a), but has no effect on integrin- or N-cadherin-dependent neurite outgrowth. Thus, in terms of neurite outgrowth, but not adhesion (Acheson et al., 1991), PSA can be considered a highly specific modulator of NCAM function. PSA may operate by favoring the formation of transient rather than stable clusters of NCAM via charge repulsion and/or steric hindrance, and this may favor neurite outgrowth at the expense of adhesion. As CAMs can clearly interact to promote neurite outgrowth, a direct modulation of NCAM function would in some systems be expected to indirectly modulate the function of other molecules, and in particular L1. During development, cerebellar neurons lose their ability to respond to NCAM over a very short period (PND6-PND8), and this most probably relates to increased expression of NCAM isoforms containing the product of the VASE exon (Doherty et al., 1992a, and our own unpublished observations). The fact that the same neurons remain highly responsive to L1 demonstrates that neuronal responsiveness to NCAM and L1 is not co-ordinately regulated. In addition, the above data show that two independent mechanisms that down-regulate NCAM-dependent neurite outgrowth, i.e., loss of PSA and use of the VASE exon, do not directly impinge on L1's ability to promote neurite outgrowth. Finally, in the present study we have shown that NCAM- and L1-dependent neurite outgrowth can be dissociated from NCAM/L1-dependent adhesion, as relatively modest reductions in extracellular calcium inhibit the former but not the latter (e.g., see Miura et al., 1992). The reduction in calcium did not impair neurite outgrowth per se, as this was unaffected on control 3T3 monolayers (see also Campenot and Draker, 1989). It follows that CAM-dependent adhesion does not directly contribute to the neurite outgrowth response. Rather, CAM-dependent neurite outgrowth would appear to be absolutely dependent on the ability of NCAM and L1 to provide a recognition signal that is transduced into a cellular response via the activation of a CAM-specific second messenger pathway in neurons (Doherty and Walsh).
The fact that L1-dependent neurite outgrowth can be fully inhibited by a reduction of the level of extracellular calcium, by calcium channel blockers, or by pretreatment of neurons with a calcium chelating agent indicates that calcium influx into neurons is the key step in the L1-dependent response. Direct stimulation of calcium influx into neurons can, in the absence of any presumptive adhesion step, fully mimic the cell-contact-dependent neurite outgrowth response stimulated by L1, NCAM, and N-cadherin. We would therefore conclude that activation of this second messenger pathway is likely to be solely responsible for the neurite outgrowth promoting activity of a large number of CAMs.
The spatial extent of Polycyclic Aromatic Hydrocarbons emission in the Herbig star HD 179218

We investigate in the mid-IR the spatial properties of the PAH emission in the disk of HD 179218. We obtained mid-IR images in the PAH1, PAH2, and Si6 filters at 8.6, 11.3, and 12.5 µm, and N-band low-resolution spectra, using CanariCam on the GTC. We compared the PSFs measured in the PAH filters to the PSF derived in the Si6 filter, where the thermal continuum dominates. We performed radiative transfer modelling of the spectral energy distribution and produced synthetic images in the three filters to investigate different spatial scenarios. Our data show that the disk emission is spatially resolved in the PAH filters, while unresolved in the Si6 filter. An average FWHM of 0.232″, 0.280″, and 0.293″ is measured in the three filters. Gaussian disk fitting and quadratic subtraction of the science and calibrator PSFs suggest a lower-limit characteristic angular diameter of the emission of circa 100 mas (circa 40 au). The photometric and spectroscopic results are compatible with previous findings. Our radiative transfer (RT) modelling of the continuum suggests that the resolved emission results from PAH molecules in the disk atmosphere being UV-excited by the central star. Geometrical models of the PAH component compared to the underlying continuum point at a PAH emission uniformly extended out to the physical limits of the disk model. Also, our best RT model of the continuum requires a negative exponent of the surface density power law, in contrast to earlier modelling pointing at a positive exponent. Based on spatial and spectroscopic considerations, as well as on qualitative comparison with IRS 48 and HD 97048, we favor a scenario in which PAHs extend out to large radii across the flared disk surface and are at the same time predominantly in an ionized charge state due to the strong UV radiation field of the 180 L_sun central star.

Introduction

Circumstellar disks around pre-main sequence stars constitute the reservoir of gas and dust out of which planetary systems may form. The study of their morphological structure and spectroscopic content, as well as of their temporal evolution, provides important information used to constrain models of planet formation. From the spectral shape of the infrared excess, Meeus et al. (2001) classify intermediate-mass Herbig stars into two groups based on the possible geometry of their dust disks. Group I objects exhibit a flared disk geometry, while group II sources correspond to a flatter geometry of the circumstellar disk. Recently, a number of high-angular-resolution and high-sensitivity spectroscopic studies have provided evidence showing the complex spatial structure of disks in the form of (pre-)transitional or "gapped" disks, sometimes harboring spiral structures (Calvet et al. 2002; Furlan et al. 2006; Espaillat et al. 2010; Gräfe et al. 2011; Tatulli et al. 2011; Benisty et al. 2015). A powerful tracer of the possible flared structure of the disk is found in the polycyclic aromatic hydrocarbon (PAH) mid-infrared emission bands detected in a significant number of Herbig stars (Acke et al. 2010). When in the direct line of sight of the central star, PAH molecules on the surface of a flared disk can be electronically UV-excited by stellar photons even at large distances in the disk, and cool down by re-emitting in the CH- or CC-stretching and bending modes at characteristic wavelengths (e.g., at 6.3, 8.6, or 11.3 µm).
High-spatial-resolution imaging and long-slit spectroscopy in the PAH bands have exploited these properties to investigate the outer disk structure in HD 97048 (Lagage et al. 2006), or to trace possible gas flows through the gap (Maaskant et al. 2014). PAH emission also traces the presence of very small grains mixed with the gas at high elevation above the midplane; these have a significant influence on the structure of the disk by contributing to the gas heating (Habart et al. 2004). In this paper we present CanariCam (Telesco et al. 2003) high-angular-resolution mid-infrared imaging and spectroscopy data for HD 179218, a Herbig star located at ∼290 pc with a B9 spectral type that harbors a circumstellar disk primarily revealed through its infrared excess. Meeus et al. (2001) classified HD 179218 as a group-Ia source, which suggests a flared disk structure. A large amount of crystalline grains is found in this source, which points to significant dust processing. The latter two papers report the detection of PAHs at 8.6 µm and at 11.3 µm. These detections were later confirmed and quantified by Juhász et al. (2010). Regarding the spatial structure of HD 179218's disk, Fedele et al. (2008) used MIDI/VLTI mid-infrared interferometry to show that HD 179218 could be a pre-transitional disk. Furthermore, the authors noticed a lower visibility shortward of 9 µm that may result from a larger size scale of the PAH emission with respect to the continuum, but the scenario remains speculative given the quality of the MIDI data. Here, we aim at resolving the disk emission in two PAH bands in order to constrain the global structure of the disk on large scales (Wolf et al. 2012), as we may assume that the PAH molecules remain co-spatial with the gas (Woitke et al. 2016). The paper is structured as follows: Section 2 summarizes the new observations conducted on the GTC. Section 3 presents the observational imaging and spectroscopic results and Section 4 focuses on the derivation of the characteristic size of the emission. Section 5 presents our modeling to investigate the origin of the resolved emission, while our results are discussed in Section 6.

Observations and data reduction

We used CanariCam (Telesco et al. 2003), the mid-infrared (7.5-25 µm) imager with spectroscopic capabilities of the Gran Telescopio CANARIAS in La Palma, Spain. Although the GTC has an equivalent 10.4-m primary mirror, the diffraction limit achievable with CanariCam is set by a cold circular pupil stop equivalent to 9.4 m, used to optimize the sensitivity. CanariCam holds a Raytheon 320x240 Si:As detector which covers a field of view of [...].

[Fig. 1. Illustration of the frame selection on the PSF calibrators HD 187642 (top) and HD 169414 (bottom) for the PAH-1 filter. The full distribution is given by the filled + empty symbols. The filled squares correspond to the frames finally selected. The dashed line is the theoretical diffraction limit of the telescope at 8.6 µm. The FWHM is estimated by fitting a Lorentzian function.]

[...] compared to the calibrators, which were observed in much better seeing conditions. The precipitable water vapor was measured between 7 mm and 8 mm, which is suitable for good-quality observations in the N band. Low-resolution spectroscopic observations were obtained on July 1, 2015, under good weather conditions with ∼0.8″ seeing and 10 mm precipitable water vapor. The spectroscopic calibration was obtained by observing two different Cohen standard stars before and after the science target. The 0.36″ slit was used with a 6″ throw.
The IDL pipeline iDealCam (Li 2014), which was custom-designed to reduce CanariCam imaging data, was used for the data reduction. After removal of the thermal background, the pipeline produces individual frames (or savesets) of ∼2 s duration that can be combined into a long integration sequence. Savesets can be realigned along their centroid through a two-dimensional Lorentzian fit of the PSF prior to the final shift-and-add stacking. This allows reducing centering and tip-tilt errors that may otherwise lead to unwanted broadening of the PSF. Despite observing in the mid-infrared range with a good ∼0.5-0.6″ average seeing, the atmospheric turbulence above a 10-m class telescope still degrades the image quality, resulting in a PSF that is not fully diffraction limited. Depending on the instantaneous strength of the turbulence, the image quality of individual savesets can be significantly affected, which appears in the form of distorted and elongated PSFs. For all our targets, we visually inspected each saveset of the CanariCam dataset and discarded the most corrupted images, that is, those where the PSF clearly departs from a circular shape. We note that, in order to avoid biases introduced by selecting only the "best" images, we applied the same visual criterion for frame selection to both the PSF reference and the science targets. The saveset duration of ∼2 s is the same for the science and reference targets, which also have similar fluxes in the N band. The resulting on-source integration time for the final stacked images is reported in Table 2 together with the percentage of frame selection. The final stacked image is obtained after re-centering and co-addition of the "positive" and "negative" images resulting from the data reduction (see Sect. 3.2). Except for the first calibrator in the Si-6 filter, for which the seeing conditions were not sufficiently good (see later), the procedure of frame selection resulted in discarding about 20% of the frames. We inspected approximately 448 savesets for the science target and approximately 96 savesets for both reference stars and for the three filters. The frame selection resulted in an effective on-source total integration time for HD 179218 of 1572 s, 1653 s, and 1667 s for the PAH-1, PAH-2, and Si-6 filters, respectively. Figure 1 illustrates the result of the process of frame selection on the FWHM distributions used in our analysis. The FWHM is estimated by fitting a Lorentzian function. Points have been removed when they correspond to a visually distorted PSF. Some points lie below the theoretical diffraction limit of the telescope (dashed line) due to an improper Lorentzian fit of the corresponding PSF and have consequently been removed for both the science target and the PSF calibrators. Importantly, we note that comparatively large values of the FWHM are not removed from the distributions as long as the visual inspection of the corresponding saveset is compatible with a circular PSF. In this way, we aim at limiting bias effects in the analysis of the FWHM distributions. The plots correspond, from bottom to top, to the filters PAH-1, PAH-2, and Si-6. The integration time for the individual saveset, for both the science target and the calibrator, is 2.1 s (PAH-1), 2.5 s (PAH-2), and 2.1 s (Si-6). The horizontal dashed lines show the theoretical diffraction limit of the GTC, which is 0.19″, 0.25″, and 0.27″ at 8.6 µm, 11.3 µm, and 12.5 µm respectively, including the central obscuration. For the Si-6 filter, the vertical dotted line around frame #100 shows the effect of poor seeing on the measured FWHM distribution for the first calibrator.
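The per-saveset FWHM values discussed above come from fitting a Lorentzian to the stellar PSF. The sketch below illustrates one possible way of doing such a fit on an azimuthally averaged profile; the pixel scale, the profile construction, and the synthetic test image are assumptions for illustration and are not the iDealCam implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

PIXEL_SCALE = 0.0798  # arcsec per pixel, assumed plate scale for illustration

def lorentzian(r, amp, gamma, offset):
    """1-D Lorentzian profile; its FWHM is 2 * gamma."""
    return amp / (1.0 + (r / gamma) ** 2) + offset

def fwhm_from_saveset(image):
    """Estimate the PSF FWHM (arcsec) of one saveset from its radial profile."""
    cy, cx = np.unravel_index(np.argmax(image), image.shape)
    y, x = np.indices(image.shape)
    r = np.hypot(y - cy, x - cx)
    # azimuthal average in 1-pixel-wide annuli
    r_bin = r.astype(int).ravel()
    counts = np.bincount(r_bin)
    sums = np.bincount(r_bin, weights=image.ravel())
    profile = sums / np.maximum(counts, 1)
    radii = np.arange(profile.size)
    p0 = (profile[0] - profile[-1], 2.0, profile[-1])
    (amp, gamma, offset), _ = curve_fit(lorentzian, radii, profile, p0=p0)
    return 2.0 * abs(gamma) * PIXEL_SCALE

# Quick self-test on a synthetic, roughly diffraction-sized spot.
yy, xx = np.mgrid[:64, :64]
test = lorentzian(np.hypot(yy - 32, xx - 31.6), 1000.0, 1.5, 5.0)
print(f"recovered FWHM ~ {fwhm_from_saveset(test):.3f} arcsec")
```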
Statistics of the full width at half maximum

Similarly to Moerchen et al. (2007), we have explored the statistical behavior of the PSF full width at half maximum (FWHM) distribution obtained after Lorentzian fitting of the individual selected savesets. By treating the individual 2-s short exposures statistically and employing sub-pixel recentering for both the science and calibration targets, we minimize the influence of long-term biases (e.g., guiding errors, pupil rotation, and seeing fluctuations) that may result in a broadening of the final PSF when simply stacking the long image sequence. We extracted from the distribution of the FWHM data the mean and the error on the mean σ/√N, where σ is the distribution standard deviation and N is the number of savesets (respectively nodsets) in the distribution. Figure 2 shows the distribution of FWHM values of the individual savesets for each calibrator-science-calibrator sequence in the three filters. It is already possible to visually discriminate the vertical positioning of the bulk of the distribution for the science target (blue open circles) and the adjacent calibrators (red open squares). The plots provide evidence that, for the PAH filters at 8.6 µm and 11.3 µm, the FWHM of HD 179218 is on average larger than the FWHM of the adjacent calibrators. For the Si-6 filter at 12.5 µm, if we neglect the FWHM distribution of the first calibrator, which was corrupted by poor seeing, the FWHM distribution of HD 179218 does not exhibit any remarkable deviation from that of the calibrator.

[Table 2. Measured mean FWHM and its resulting 3σ uncertainty from the distribution of savesets. FWHM_L is estimated through a Lorentzian fit of the saveset's PSF. The term σ is here the error on the mean of the distribution, that is, the estimated root mean square divided by √N, with N being the number of selected frames. FWHM_P is estimated graphically from the PSF profile according to the definition of the FWHM; its reported 3σ uncertainty is computed from the mean and error on the mean of the distribution of radial profiles computed for each saveset. The uncertainties obtained with both methods are therefore comparable.]

In Table 2 we report, for the saveset distribution, the estimated mean FWHM along with the corresponding 3σ error for the science target and the calibrators in the three different filters. For consistency, we compared the FWHM statistics obtained with Lorentzian fitting to the statistics obtained by a direct graphical reading of the FWHM according to its definition. The Lorentzian fit approach systematically gives an average FWHM that is lower by ≤10% than the FWHM derived from a graphical reading. However, the relative trends between science and calibrators are reproducible. We observe that, within the 99.7% confidence level, the measured FWHM of the science target is larger than that of the adjacent calibrators in both PAH filters, while it is the same in the Si-6 filter at 12.5 µm. We further assessed the resolved nature of HD 179218's emission by applying the criterion of Moerchen et al. (2010): we compare the difference in FWHM between the science target and the calibrator star, FWHM_sci − FWHM_cal, to the combined standard deviation (i.e., error) of the mean, σ_tot = √(σ_sci² + σ_cal²). With FWHM_sci − FWHM_cal ≥ 3σ_tot the science source can be considered as spatially resolved.
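This resolved/unresolved test reduces to comparing the FWHM difference with the quadrature sum of the two errors on the mean. A compact sketch follows, assuming the per-saveset FWHM values are available as plain arrays; the numbers used in the demonstration are illustrative, not the measured distributions.

```python
import numpy as np

def is_resolved(fwhm_sci, fwhm_cal, n_sigma=3.0):
    """Moerchen-style criterion: the source is taken as resolved when
    mean(FWHM_sci) - mean(FWHM_cal) >= n_sigma * sqrt(sem_sci**2 + sem_cal**2)."""
    sci = np.asarray(fwhm_sci, dtype=float)
    cal = np.asarray(fwhm_cal, dtype=float)
    sem_sci = sci.std(ddof=1) / np.sqrt(sci.size)
    sem_cal = cal.std(ddof=1) / np.sqrt(cal.size)
    sigma_tot = np.hypot(sem_sci, sem_cal)
    delta = sci.mean() - cal.mean()
    return delta >= n_sigma * sigma_tot, delta, sigma_tot

# Illustrative values in arcsec (assumed for demonstration only):
rng = np.random.default_rng(1)
sci = rng.normal(0.232, 0.02, 450)
cal = rng.normal(0.210, 0.02, 90)
resolved, delta, sigma_tot = is_resolved(sci, cal)
print(resolved, f"delta = {delta:.4f}'' , sigma_tot = {sigma_tot:.4f}''")
```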
According to this criterion, we give in Table 3 the result of this analysis, showing that the circumstellar emission around HD 179218 is resolved at the ≥3σ confidence level in the two PAH filters, but is unresolved at 12.5 µm. [Table 3 note: in columns (2) and (3), the first term corresponds to the Lorentzian fit method and the second term to the direct graphical reading of the FWHM from the PSF profile.]

Retrieval of the PSF profile

The final images are obtained after re-centering and stacking the individual savesets. Image re-centering has to be performed carefully to avoid the significant increase of the final FWHM typically observed in long observing sequences. A simple re-centering step based on matching the individual image centroids induces an increase in the FWHM of the final stacked image of ∼15% compared to a single saveset. To improve the centering step, we realigned each saveset by minimizing the quadratic difference between the image of each normalized PSF and the image of the first PSF of the sequence, taken as a reference. This operation is implemented with a sampling accuracy of one fifth of a pixel. Such an approach led to a sharper PSF. Figure A.1 gives the Lorentzian-fitted FWHM of the stacked image as a function of the number of coadded savesets. The plot shows, within the limits of the available frames, that the FWHM of the stacked image tends to the statistical value found in Table 2. After recentering the individual savesets as described above, the radial profiles for the calibrator and science targets are built for the different available filters. To construct such a profile along with its associated error bars, we extracted the profile for each individual recentered saveset and computed the mean and the error on the mean for each radial pixel. The results for the three different filters are shown in Fig. 5 for the science and calibrator PSFs.

Photometry

We performed aperture photometric calibration of HD 179218 using HD 169414 and HD 187642 as Cohen photometric standards. With the two reference stars observed before and after the science target, we can also probe the long-term photometric stability of the night. The aperture radius was optimized to 4×FWHM, or ∼1″. This size is close to the standard 5×FWHM recommended by photometry manuals. The residual background was estimated in a surrounding ring with an inner and outer radius of ∼1.6″ (20 pixels) and ∼2.4″ (30 pixels), respectively. For each science and calibrator target the photometric accuracy is derived from the measurement of the standard deviation of the flux over the saveset time series. The values for the photometric standards in the CanariCam filters are taken from www.astro.ufl.edu/∼dli/web/IDEALCAM_files/iDealCam_v2.0.zip.

[Table 4. Photometric calibration of HD 179218 in the CanariCam filters using the photometric standards reported in column (3). The reported errors are 3σ uncertainties. The larger uncertainty obtained with HD 169414 in the Si-6 filter is due to the initially poorer conditions of the night. We note that the uncertainties reflect only the photometric stability of our measurement. It is however known that many of the Cohen standards show some variability, which results in an absolute photometric accuracy of about 10%.]

The results of the photometric calibration are shown in Table 4 and are consistent with Spitzer spectroscopy data by Fedele et al. (2008) and Juhász et al. (2010).
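The aperture photometry described above amounts to summing the flux inside a circular aperture and subtracting a background level estimated in a surrounding annulus. A minimal numpy sketch follows; the aperture and annulus radii repeat the values quoted in the text, while the synthetic frame and its counts are illustrative assumptions.

```python
import numpy as np

def aperture_photometry(image, center, r_ap, r_in, r_out):
    """Background-subtracted aperture flux.

    r_ap         : aperture radius in pixels (here ~4 x FWHM, i.e. ~1 arcsec).
    r_in, r_out  : inner/outer radii of the background annulus in pixels.
    The background per pixel is the median of the annulus and is subtracted
    from every pixel inside the aperture.
    """
    y, x = np.indices(image.shape)
    r = np.hypot(y - center[0], x - center[1])
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    background_per_pixel = np.median(image[annulus])
    return image[aperture].sum() - background_per_pixel * aperture.sum()

# Illustrative use on a synthetic frame (not CanariCam data):
rng = np.random.default_rng(2)
frame = rng.normal(10.0, 1.0, (240, 320))
frame[120, 160] += 5000.0  # fake point source
flux = aperture_photometry(frame, center=(120, 160), r_ap=12, r_in=20, r_out=30)
print(f"background-subtracted flux: {flux:.0f} counts")
```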
Spectroscopy

The spectrum of HD 179218 was reduced with the RedCan pipeline (González-Martín et al. 2013) and is shown in Fig. 3. The reduced spectrum is found to overestimate the flux by ∼40% in comparison to our photometric values, which may have different causes (e.g., presence of cirrus or imperfect background subtraction). As we wish to make a relative comparison of the shape of the CanariCam spectrum and the Spitzer spectrum, the former has been rescaled to the measured photometric values reported in Table 4. After rescaling, the relative comparison with the Spitzer spectrum shows good agreement in shape, with visible peaks at ∼8.6-8.7 µm, 10.6 µm, and ∼11.2-11.3 µm. Qualitatively, the flux density is slightly larger in the PAH-1 filter for CanariCam than for Spitzer, while it is lower in the PAH-2 filter. The spectral calibration in the 9.3-9.9 µm region is strongly affected by the ozone atmospheric feature, and residuals of the data reduction can be seen. We also remark that the flux measured by CanariCam between 9 and 9.2 µm appears overestimated by 10 to 20% in comparison to ISO and Spitzer.

[Table 5. Angular diameter of the circumstellar emission measured in the three different filters (surviving row, Si-6: ≤0.024±0.009 and ≤0.035±0.011). The uncertainty refers to the 1σ_d error as derived in Mariñas et al. (2011). The source is considered resolved if the deconvolved (in the sense of quadratic subtraction) diameter is larger than 3σ_d. The subscripts L and P refer to the Lorentzian fit and to a direct measurement of the PSF profile, respectively.]

Quadratic subtraction of FWHMs

For small differences in FWHM measurements, as in our case, PSF deconvolution is a delicate technique which strongly depends on the assumptions made about the PSF and the number of iterations. Similarly to Moerchen et al. (2010) and Mariñas et al. (2011), an alternative to deconvolution is the estimate of the disk diameter from the quadratic subtraction of the science and calibrator PSF FWHMs, i.e., D_d = √(FWHM_sci² − FWHM_cal²). The errors associated with D_d are calculated following Eq. 2 in Mariñas et al. (2011). Our estimates are reported in Table 5. Assuming a distance of 293 pc (cf. Sect. 5), we found in the two PAH filters a comparable characteristic diameter of ∼24-30 au. On average, the disk emission appears slightly more extended in the PAH-1 filter than in the PAH-2 filter. The disk emission is found to be unresolved in the Si-6 filter in the sense of the 3σ_d criterion.

Gaussian disk

We complemented the previous estimate with a simple approach to determine the characteristic size of the resolved emission. Namely, we model the emission as a two-dimensional face-on Gaussian disk convolved with the telescope PSF in the corresponding filters. This model only depends on the FWHM of the Gaussian function, though it might not always be sophisticated enough to reproduce the full shape of the PSF profile (core + wings). The characteristic size is estimated by visually matching the synthetic profile to the science profile within the experimental error bars. In this analysis we used the final images after recentering and stacking. In the PAH-1 filter, a Gaussian disk with a FWHM of 95±6 mas reproduces our science PSF profile, whereas a Gaussian disk with a FWHM ≤22±7 mas would remain spatially unresolved. In the PAH-2 filter, the Gaussian disk model fitting our science profile has a FWHM of 101±7 mas, and the unresolved disk would have a FWHM ≤25±7 mas. Finally, in the Si-6 filter the Gaussian disk must have a FWHM ≤36±7 mas so as not to exceed the FWHM of the science profile. This analysis confirms that the disk emission in HD 179218 is resolved in the PAH filters and unresolved in the 12.5 µm Si-6 filter. However, it is not possible to conclude within the error bars on a difference in the angular size of the emission in the PAH-1 and PAH-2 filters.
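The size estimates in this section reduce to simple arithmetic: a quadrature subtraction of the calibrator FWHM from the science FWHM, followed by a small-angle conversion to a physical scale at the adopted distance. The sketch below illustrates that arithmetic; the input FWHMs are illustrative, and the error expression of Mariñas et al. (2011) is not reproduced here and should be consulted for the uncertainty term.

```python
import math

DISTANCE_PC = 293.0  # adopted distance to HD 179218

def deconvolved_diameter(fwhm_sci_arcsec, fwhm_cal_arcsec):
    """Quadrature-subtracted characteristic angular diameter (arcsec)."""
    return math.sqrt(fwhm_sci_arcsec**2 - fwhm_cal_arcsec**2)

def angular_to_au(theta_arcsec, distance_pc=DISTANCE_PC):
    """Small-angle conversion: 1 arcsec at 1 pc corresponds to 1 au."""
    return theta_arcsec * distance_pc

# Illustrative values in arcsec (assumed, not the tabulated measurements):
d_ang = deconvolved_diameter(0.232, 0.210)
print(f"{d_ang * 1e3:.0f} mas  ->  {angular_to_au(d_ang):.0f} au")
```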
Modeling

Our imaging results show that the circumstellar emission around HD 179218 is spatially resolved in the two PAH filters, whereas it remains unresolved at 12.5 µm. We attempt to identify the origin of the extended emission. A natural comparison arises with the case of HD 97048, for which Lagage et al. (2006) resolved the emission of polycyclic aromatic hydrocarbons at the surface of the flared disk in direct view of the central star. In order to test this possible configuration in the case of HD 179218, we adopt the following strategy: we develop a radiative transfer model of a disk that simultaneously fits the SED and is spatially unresolved in the synthetic CanariCam image at 12.5 µm. The idea is to constrain the disk's size in the 12.5 µm band, which is dominated by the dust thermal emission and shows no significant presence of PAH emission (Acke et al. 2010; Juhász et al. 2010). A PAH-free disk emission model with the same parameters is then produced at 8.6 and 11.3 µm, from which a synthetic observational profile can be extracted and compared to our observations. A similar strategy was successfully used by Honda et al. (2012) to model the gap size of HD 169142 in the Q band. In this approach, the modeling of the thermal emission at 12.5 µm gives us an upper limit on the outer disk's dimension, while the inner regions are unresolved with CanariCam. If the 8.6 and 11.3 µm synthetic profiles are spatially unresolved, this would be a good indication that the observed resolved emission is not of thermal origin (in the sense of "continuum" origin). On the contrary, in case the 8.6 and 11.3 µm synthetic profiles are found to be spatially resolved, a further observational constraint needs to be added, for instance through mid-infrared interferometric data, to be conclusive on the nature of the resolved emission in the PAH bands. Dominik et al. (2003) proposed a first radiative transfer model based on a single disk geometry with an outer radius of 30 au and a positive power-law index p = 2 of the surface density. In a statistical study of HAeBe disks, Menu et al. (2015) used simple geometrical temperature-gradient models in combination with mid-infrared interferometry data to infer a half-light radius of the disk of 7±1.2 au at 254 pc. Using nulling interferometry, Liu et al. derived a radius of 13.5±3 au based on a ring-like disk model at a similar distance of 244 pc. Moreover, Fedele et al. (2008) applied an achromatic geometrical disk model to VLTI/MIDI data and proposed as their best solution a two-component pre-transitional disk structure with an inner disk extending from 0.3-3 au and an outer component whose bulk mid-IR emission lies in a 13-22 au region at 240 pc.

[Table: stellar parameters adopted for the modeling — M* = 3.66 M⊙, T_eff = 9640 K, R* = 4.8 R⊙, d = 293 pc.]

Description of the disk model

We developed radiative transfer disk models for HD 179218 aiming at constraining simultaneously the spectral energy distribution (SED) and the imaging data on the source. We used for this purpose the well-established Monte Carlo code RADMC3D (Dullemond & Dominik 2004), which permits one to synthesize disk images and SEDs.
Assuming a disk in vertical hydrostatic equilibrium and perfect gas/dust coupling, the dust density in g cm −3 is modeled analytically according to ρ(r, z) = Σ(r)/(√(2π) H(r)) exp(−z²/2H(r)²), where the parametrized dust surface density is Σ(r)=Σ out (r/r out ) p and the dust scale height is H(r)=H out (r/r out ) (1+β) , with p the surface density exponent and β the disk flaring exponent. The subscript out refers to the outer radius of each disk component considered. A difference with respect to earlier models is that we assume more recent estimates of the stellar parameters and parallax, which naturally influences the radiative transfer calculation and the production of the synthetic images. We used for the central star the parameters from Alecian et al. (2013), namely a luminosity L * =180L ⊙ , a radius R * =4.8R ⊙ , a mass M * =3.66M ⊙ and an effective temperature T eff =9640 K. A luminosity of 80-100L ⊙ was typically assumed in the earlier works. Based on a recent GAIA parallax measurement of 3.41±0.35 mas, we assume a distance of 293 pc rather than the 240 pc found in the literature. The experimental SED of the system is taken from Acke & van den Ancker (2004) and contains photometric data from the literature as well as spectroscopic data from ISO. The RT grid extends from the dust sublimation radius to 200 au. The disk temperature distribution is computed through a first Monte Carlo run using 10 6 photon packets. Since isotropic scattering is considered in our modeling, the scattering source function is then computed at each wavelength through an additional Monte Carlo run using 3×10 5 (resp. 5×10 4 ) photon packets for the images (resp. for the SED). A ray-tracing method is then applied to compute the synthetic SED and images (at 8.6, 11.3, and 12.5 µm). In our RT modeling, we do not include PAHs, although they are clearly present, and model only the dust continuum. This is discussed later in the paper. Following Fedele et al. (2008), we assume as a starting baseline a passive irradiated pre-transitional disk structure consisting of an inner narrow ring, a low dust density gap region and a larger outer disk. The outer component is decomposed into a warm disk atmosphere component that will dominate the mid-IR emission, and a colder mid-plane component, optically thick at 10 µm, that will dominate the far-IR and sub-mm emission. The outer disk warm atmosphere will mostly influence our data in the mid-infrared (see Sects. 5.3 and 5.4). The dust opacities The continuum emission being entirely dominated by the dust, the grain composition and the resulting opacity influence the shape and values of the SED from the near-IR to the sub-mm. As we do not aim at a detailed fit of the spectral features, already done elsewhere, our approach is to assume opacity laws detailed in the literature in order to place ourselves in a realistic case. For the outer disk, we assumed a composition of 90% amorphous silicate grains and 10% crystalline enstatite grains following the findings of Juhász et al. (2010). We assumed a size distribution ∝ a −3.5 (Mathis et al. 1977) from 0.1 µm to 100 µm for the amorphous grain population and a fixed size of 2 µm for the enstatite grain population. The midplane is populated with larger amorphous silicate grains to reflect dust sedimentation, with sizes ranging from 10 µm to 1 mm following a similar power-law size distribution to the one before.
Finally, the inner disk/gap is populated with a mixture of amorphous silicate and highly refractory carbon grains at a ratio of approximately 9:1, respectively, in agreement with Dominik et al. (2003). Fitting procedure: SED and 12.5 µm image The procedure consists in best-fitting the SED through a χ 2 minimization and verifying a posteriori that the corresponding synthetic image is unresolved at 12.5 µm. For a given radiative transfer model, we thus produced a synthetic SED to be compared to the observational SED, and a synthetic disk image at 12.5 µm that we convolved with our PSF reference star in the Si-6 filter. A radial profile is then extracted to be compared with the observed profile at 12.5 µm. As a first step, we run our radiative transfer code to identify a reference baseline model (RBM) that provides a good visual fit to the SED without PAH. This model assumes the properties aforementioned (central star, mineralogy) as well as a pre-transitional disk structure (inner disk + gap + outer disk) (Fedele et al. 2008). A first exploration of the parameters shown in Table 7 (cf. caption) allows us to converge towards a possible solution for the RBM based on the SED fit. As a second step, on the basis of this RBM, we determined which parameters influence most significantly the mid-IR profiles and refine our search on these parameters by including the information on the 12.5 µm PSF profile. In this way, we avoid varying all the parameters of the model to minimize degeneracy effects. a) Inner disk: the inner radius is fixed at 1.1AU, which roughly fits the dust sublimation radius given our stellar parameters. We then tested the influence of the outer radius of the inner disk by varying it out to 5 au and varying the exponent of the power law from p=-2 to p=2. This corresponds to a range of exponents typically found for disk models (Dominik et al. 2003). We observe a mild influence of these parameters on the SED in the 2-3 µm near-IR region and no impact on the PSF profiles at 8.6, 11.3, and in particular at 12.5 µm. The inner disk scale height and flaring index have also no measurable impact on the midinfrared profiles. b) Gap: this low-density region hosts a dust mass of ∼10 −13 M ⊙ in the RBM. The power-law exponent of the surface density was varied from -2 to +2. This parameter did not impact the various PSF profiles either. c) Outer disk: the inner and outer radii of the outer disk component, R i and R o , along with the surface density power-law exponent were found to influence most significantly the PSF profiles at 12.5 um and the SED at mid-IR up to sub-mm wavelengths. We therefore concentrated on these three parameters in what follows. Exploration of the outer disk's parameters p, R i , and R o We have conducted a small parameter search by varying the power law p in the range {-2,+2} in steps of 0.5, the inner radius of the outer disk R i in the range {8 au,12 au} and the outer radius of the outer disk R o in the range {30 au,150 au}. We have simultaneously compared our synthetic SED and 12.5-µm PSF profile to our observations. Table B.1 gives the value of the non-reduced χ 2 for the SED fit as a function of (p, R i , R o ) and highlights the models for which the PSF profile at 12.5 µm is either spatially resolved or unresolved. The parameter p in any model has the same value for the two components of the outer disk, that is, the disk surface and the midplane. 
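The exploration just described amounts to a simple grid search with an a-posteriori check on the 12.5 µm profile. The Python sketch below shows only the bookkeeping: run_disk_model is a hypothetical toy stand-in for the actual RADMC-3D computation (it returns mock values so the script runs end to end), the χ² is the non-reduced sum over SED points, and a model is retained only if its 12.5 µm synthetic profile stays unresolved in the 3σ d sense. None of the numbers are the paper's actual measurements.

```python
import itertools
import numpy as np

def run_disk_model(p, r_in, r_out, wavelengths):
    # Toy stand-in for the RADMC-3D model of Sect. 5: returns a mock SED (Jy)
    # and a mock 12.5-um FWHM (mas).  Real values must come from the
    # radiative-transfer code; only the search logic below is illustrated.
    sed = 10.0 * (wavelengths / 10.0) ** -0.5 * (1.0 + 0.01 * r_out) / (1.0 + abs(p))
    fwhm_12p5 = 215.0 + (0.2 * r_out if p >= 0.0 else 2.0)
    return sed, fwhm_12p5

def grid_search(wavelengths, f_obs, f_err, fwhm_cal=215.0, sig_d=12.0):
    best = None
    for p, r_in, r_out in itertools.product(
            np.arange(-2.0, 2.5, 0.5), (8.0, 10.0, 12.0), (30.0, 80.0, 150.0)):
        f_mod, fwhm_sci = run_disk_model(p, r_in, r_out, wavelengths)
        chi2 = np.sum(((f_obs - f_mod) / f_err) ** 2)   # non-reduced chi^2
        # Keep only models whose 12.5-um profile stays unresolved, i.e. the
        # quadratically subtracted diameter is below the 3*sigma_d threshold.
        unresolved = fwhm_sci**2 - fwhm_cal**2 <= (3.0 * sig_d) ** 2
        if unresolved and (best is None or chi2 < best[0]):
            best = (chi2, p, r_in, r_out)
    return best

wavelengths = np.logspace(0.0, 3.0, 50)             # 1 um to 1 mm, mock grid
f_obs, _ = run_disk_model(-1.5, 10.0, 80.0, wavelengths)
print(grid_search(wavelengths, f_obs, 0.1 * f_obs))  # recovers p=-1.5, r_out=80
```

The mock is built so that positive-p models come out resolved, mimicking the behaviour described in the text, but the real discrimination is of course done with the full radiative-transfer images.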
For p ≥0, that is, when most of the mass is located in the outer regions of the disk, the PSF profile at 12.5 µm is systematically resolved (red-box values in Table B.1). For a negative power law, we find that an overly small outer radius R o does not produce a satisfactory fit of the SED. For large values of R o (e.g., 150 au) and a negative power law, the flux in the far-IR and sub-mm range tends to be underestimated. A best fit for the SED is found for p=-1.5, R o =80 au, and R i =10 au with a non-reduced χ 2 value of 302 (or χ 2 r =1.9 for N-ν=157). This model, for which the 12.5 µm PSF is unresolved, is then chosen as the reference baseline model for characterizing the PSF profiles in the PAH filters. We highlight that the approach adopted to isolate a disk model for further analysis has some limitations: for the outer disk, other parameters may influence the result, such as, for example, the flaring index β, which is not included in the minimization process. Nevertheless, during the search for a reference baseline model, we examined the influence of the flaring index and found that as soon as β reaches 2/7, the infrared excess at 5 µm is overestimated. This effect cannot be satisfactorily compensated by a reduction of the disk's mass, which would then result in an emission deficit in the mid-IR and sub-mm ranges. The value of β=1/7 remains conservative in comparison to values in the literature. Figure 4 presents the resulting SED overplotted with the observational one, which shows very good agreement over the whole spectral range. Analysis of the PSF profiles The best model parametrized in Table 7 is used to produce synthetic images at 8.6 µm, 11.3 µm, and 12.5 µm that are then convolved with the GTC PSF, and for which profiles are compared to our observations. Figure 5 presents the result of the comparison in the three different filters. The size of each symbol corresponds to roughly the 3σ error on the mean of each radial point of the PSF. We observe experimentally that the observed science PSF profile (black empty circles) is spatially resolved with respect to the calibrator PSF (red empty squares) between ∼0.1 and 0.3 ′′ . This is also clearly repeatable when using the two nearby calibrator stars (see caption). This is observed in the two PAH filters, and the continuous vertical lines correspond to the FWHM values established in Table 2 for the science and calibration targets. On the contrary, the observed profiles in the Si-6 filter at 12.5 µm do not show a detectable difference in their FWHM, suggesting that the disk's extended emission is not resolved by CanariCam in the thermal continuum. We note that this comparison for the Si-6 filter is made only with the PSF calibrator second in time, as the first one suffered from poorer observing conditions. We also observe that the difference in FWHM between the science and the calibrator PSFs is more prominent in the PAH-1 filter than in the PAH-2 filter. In order to better understand the spatial properties of the thermal emission in different bands, we compare the synthetic PSF profiles derived from the best-fit model of Table 7 with our observations (blue dashed line). We clearly see that this model of thermal emission has the same profile as the PSF calibrator and is likely unresolved in our three observing bands. It is possible that another source of emission needs to be invoked to explain our observations.
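The comparison described above reduces to convolving each synthetic image with the observed PSF and extracting an azimuthally averaged radial profile. A minimal, self-contained sketch of that step is given below; the arrays are toy Gaussians, and the actual pipeline details (recentring, pixel scale, per-point error bars) are not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def radial_profile(image, center=None):
    # Azimuthally averaged radial profile in integer-pixel bins.
    ny, nx = image.shape
    if center is None:
        center = ((nx - 1) / 2.0, (ny - 1) / 2.0)
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1]).astype(int)
    flux = np.bincount(r.ravel(), weights=image.ravel())
    npix = np.bincount(r.ravel())
    return flux / np.maximum(npix, 1)

def synthetic_profile(model_image, psf_image):
    # Convolve the synthetic disk image with the (normalised) observed PSF
    # and return the radial profile to be compared with the science profile.
    conv = fftconvolve(model_image, psf_image / psf_image.sum(), mode="same")
    return radial_profile(conv / conv.max())

# Toy demonstration with Gaussian stand-ins for the PSF and the model image:
yy, xx = np.mgrid[-64:64, -64:64].astype(float)
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 4.0**2))
model = np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2))
print(synthetic_profile(model, psf)[:5])
```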
Discussion Spatially resolved PAH circumstellar emission Our measurements show spatially resolved emission in the PAH-1 and PAH-2 filters centered on the infrared emission bands (IEBs) at 8.6 and 11.3 µm. These correspond to the two most prominent PAH bands in the 8-13 µm region of the spectrum of HD 179218. The bulk of the emission detectable by our observations has a spatial extent of ∼12 to 15 au in radius, assuming d=293 pc. Fig. 6. Modeling of the science PSF profile (blue filled large dots) in the PAH-1 (left) and PAH-2 (right) filter with our hybrid model based on a Gaussian disk (black dotted line) and on the uniform disk (continuous red line) models, respectively. The insets show a close view on the wings of the PSF. No disk emission is resolved in the 12.5 µm filter, where the emission is dominated by the dust thermal continuum according to the spectrum. Advantageously, the angular diameter of the derived Gaussian disk model of HD 179218 can be compared to other existing interferometric measurements using the same model. Our Gaussian FWHMs of 95±6 mas and 101±7 mas in the PAH-1 and PAH-2 filters, respectively, are larger than the Gaussian FWHM of 80±3 mas at 10.7 µm (∆λ=1.45 µm) measured for the disk's thermal continuum by Monnier et al. (2009) using aperture masking. Nulling interferometry measurements by Liu et al. (2007) revealed a Gaussian FWHM of 81±16 mas at 10.6 µm, although over a wide 50% bandpass encompassing the whole N band. Finally, our upper limit measured in the Si-6 filter appears very coherent with the Gaussian FWHM of 0.034 ′′ ±3% estimated by Leinert et al. (2004) at 12.5 µm using MIDI/VLTI. Previous single-aperture mid-infrared imaging observations did not resolve the source at 11.6 µm (∆λ = 1.1 µm, Mariñas et al. (2011)) or in the Q band (Mariñas et al. 2011;Honda et al. 2015). These observations were however conducted with Gemini and Subaru, which deliver intrinsically poorer spatial resolution than the GTC by a factor of ∼1.2, and were not done in the PAH filters. In a second step, our radiative transfer modeling helped to investigate the nature of the resolved emission. Fitting both the SED data and our imaging data with a model containing a passive disk suggests that the dust thermal emission alone would not be resolved by our CanariCam observations. We suggest that the detected resolved emission is caused by PAH molecules UV-excited by the central star and located near the surface of the flared disk. An interesting comparison can be advanced here with HD 97048, a notable Herbig star for which the PAH emission at 8.6 and 11.3 µm is resolved out to several tens of astronomical units (Lagage et al. 2006). Doucet et al. (2007) estimated the disk diameter D d of HD 97048 - in the sense of the quadratic subtraction used above - to be ∼40 au at a distance of 180 pc, which is slightly larger than for HD 179218. However, HD 97048 being about two times closer, the disk is better resolved in its outer regions. Despite a relatively narrow science PSF core (about 1.2 times larger than the PSF calibrator core), a large amount of PAH emission is found in the wings of HD 97048's PSF out to 380 au in radius when comparing it with the emission in the immediate nearby continuum (SIV filter at 10.49 µm, see Fig. 3 in Doucet et al. (2007)). Similarly, in the case of HD 179218, the analysis presented in Sect.
4 may only provide the characteristic size of the disk's emission, but does not allow to constrain the true physical spatial extent of the PAH emission with respect to the continuum emission. For this purpose, it is necessary to account for the flux ratio between the PAH and continuum components in each filter. This can be estimated by subtracting from our photometric measurements in Table 4 the flux density of the PAH contribution at 8.6 and 11.3 µm as estimated by Juhász et al. (2010, Table 9). We derive a continuum flux density of 11.4 Jy and 19.1 Jy for a corresponding PAH flux density of 3.9 Jy and 2.9 Jy, respectively at 8.6 and 11.3 µm. We then build a hybrid model based on the image of the HD 179218's system in the continuum obtained by radiative transfer simulations (cf. Sect. 5) to which we superimpose a geometrical model simulating the PAH brightness distribution. The relative flux density of each component is scaled accordingly. For the PAH emission component, we investigated a Gaussian model and uniform disk (UD) modified model with radius R UD modulated by a radial power law r p . Using the former model, we are able to fit the core of the science PSF but we underestimate the emission in its wings in particular at 8.6 µm, whereas the smoother radial profile of the latter UD model better reproduces the full science PSF profile: As presented in Fig. 6, we find that in both filters a classical UD model with p=0, R UD =0.3 ′′ (87 au at 293 pc) and scaled to the corresponding PAHs flux density successfully matches the science PSF profiles of HD 179218. This result shows that, despite the small characteristic size -in the sense of quadratic subtraction -of HD 179218's disk emission, the PAHs emission at 8.6 and 11.3µm extends comparatively out to larger radii, which suggests a spatial scenario of the disk's emission similar to HD 97048. Charge state of PAHs in HD 179218 Our results can be replaced in the larger context of the study of PAH emission in Herbig stars' disks. When looking at the question of PAHs spatial extent and charge state (ionized vs. neutral) in more detail, the recent studies by Maaskant et al. (2013Maaskant et al. ( , 2014 highlight more complex scenarios. It is found from four typical Herbig objects that the bulk of the PAH emission in (pre)transitional Herbig systems can originate either in the inner (optically thin) or the outer (optically thick) region of the protoplanetary disk. Observationally, this results in a PAH emission component with a smaller, comparable or larger characteristic size than the thermal continuum emission. Two representative cases of, respectively, compact and extended emission are seen in IRS 48 (Geers et al. 2007) and HD 97048 (Lagage et al. 2006). Interestingly, Maaskant et al. (2014) suggests a correlation between a) the relative spatial extent of the PAH emission with respect to the thermal continuum and b) the charge state of the PAH molecules as classically traced by diagnostics such as the I 6.2 /I 11.3 or I 3.3 /I 7.7 feature ratios -see Fig. 21 in Peeters et al. (2002) -or the relative strength of the 7.7+8.6 µm feature compared to the 11.3 µm feature. In the archetypical case of HD 97048, the PAH emission is found to be significantly more extended than the thermal continuum at both 8.6 and 11.3 µm (Lagage et al. 2006;Doucet et al. 2007;Maaskant et al. 2013). 
At the same time, the object's mid-IR spectroscopy is indicative of an emission caused predominantly by neutral PAH molecules traced by the strong 3.3 and 11.3 features (Seok & Li 2017;Maaskant et al. 2014). The opposite case is found with IRS 48, where the ionized state of PAHs, as suggested by a I 6.2 /I 11.3 ratio larger than unity, goes together with the 11.3-µm PAH emission size being more compact than the continuum emission (Maaskant et al. 2014). Fig. 7. The spectra are normalized to the peak emission at 6.3 µm. Top: the small-dotted line corresponds to the total PAH emission, the dashed line to the emission from ionized PAHs in optically thin environments like the gap, and the large-dotted line corresponds to the emission from neutral PAHs in the optically thick disk. Bottom: the model for the total PAH emission in HD 179218 shows a prominent feature at 8.6 µm and a weaker feature at 11.3 µm. In HD 179218, different works based on ISO and Spitzer spectroscopy have reported and confirmed the relatively stronger 8.6-µm PAH feature compared to the 11.3-µm one (Acke et al. 2010;Juhász et al. 2010;Seok & Li 2017). Looking at the continuum-subtracted spectrum of HD 179218 modeled in Seok & Li (2017) and qualitatively comparing it to the one of IRS 48 indicates similarities between the two spectra in terms of the strength of the 6.2 and 7.7-µm features compared to the 11.3-µm feature (see Fig. 7). At first, this could point to a scenario for HD 179218 similar to IRS 48, with predominantly ionized PAHs located, for example, in the inner optically thin gap. However, our mid-IR imaging results and emission modeling suggest that the PAH contribution is not confined to the inner 10 au region of HD 179218, but may extend out to the outer disk regions, where a neutral charge state of the PAHs may be favored. A possible explanation for this scenario is that, given the high stellar luminosity of HD 179218 (L=180L ⊙ , Alecian et al. 2013), the central star produces a stronger UV radiation field capable of ionizing PAH molecules out to larger distances. A plausible hypothesis could also be that PAH ionization out to large distances results from a wide-angle wind impinging on the disk surface. In a different context, this scenario is observed in higher-mass evolved Wolf-Rayet stars (Marchenko & Moffat 2017). Ideally, the wind scenario could be investigated with IR interferometry by resolving the spatial size of the Brγ emission line detected in HD 179218 (Garcia Lopez et al. 2006) and comparing it to the size of the nearby continuum, similarly to the case of MWC297, whose Brγ emission is driven by a disk-wind mechanism (Malbet et al. 2007). For example, in the case of HD 97048, recent GRAVITY interferometric observations revealed a Brγ emission more compact than the nearby continuum (K. Rousselet-Perraut, private communication), possibly indicative of a magnetospheric accretion process taking place in the very inner disk region (Kraus et al. 2008). The lack of a clear disk-wind mechanism could hence explain the survival and abundance of neutral PAHs in that system. It is particularly interesting to note that, by tracing the ro-vibrational line of molecular H 2 at 2.12 µm in HD 97048's disk, Bary et al. (2008) conclude that H 2 is in a quiescent state, neither shocked nor entrained in a fast-moving wind or outflow associated with this young source. An alternative disk radiative transfer model In this work, we also propose an alternative radiative transfer model to Dominik et al.
(2003) for the disk of HD 179218 by fitting simultaneously the SED and our imaging data. Dust thermal emission and isotropic scattering are considered in our radiative transfer model, but PAH emission is not included. Keeping in mind the uncertainty on the object's parallax, our best model implies a larger outer disk than D03 with R o =80 au and, in particular, a negative power-law with p=-1.5 for the outer disk surface density. This is slightly lower than the p=-1 found for the outer disks of similar group Ia/b objects like HD 100546 (Tatulli et al. 2011) , HD 139614 (Matter et al. 2016 or AB Aur (di Folco et al. 2009). This is nonetheless opposite to the positive power-law (p=+2) derived by D03, and we find that any positive dust density law in which most of the mass is located in the outer regions of the disk would result in a surface brightness distribution that should be resolved by our observations at 12.5 µm. Conclusions We conducted mid-infrared imaging and spectroscopic observations of the Herbig star HD 179218 using CanariCam on the GTC and obtained the following results. -Helped by good weather conditions and by the format of CanariCam images into cubes of short duration savesets, we were able to obtain close to diffraction limited images of HD 179218, and among the sharpest N-band images obtained from the ground with a FWHMs of ∼210 mas at 8 µm. By re-centering and combining a large number of savesets, we reached 3σ uncertainties of less than 5 mas on the FWHM. With this potential, we resolve for the first time the circumstellar emission around HD 179218 in the PAH bands at 8.6 and 11.3 µm and found characteristic size of ∼100 mas in diameter. -We performed photometry of the system at 8.6, 11.3 and 12.5 µm and found values consistent with published flux densities and without noticeable variability within the measurement errors. The CanariCam low-resolution spectrum matches quite well the shape measured by Spitzer, except in the region of the Earth's ozone band where the spectral calibration is found to be unreliable. -Importantly, the combination of our imaging data with radiative transfer modeling suggests that the spatially resolved emission at 8.6 and 11.3 µm is not of thermal equilibrium nature but may originate from UV-excited PAH molecules located at the surface of the flared disk. By taking into account the relative flux ratios between the PAH and thermal component, we find that our observations are best reproduced with a model of PAH "disk" extending out to the physical limits of the dust disk model. -We discuss the compatibility of such a spatial scenario with the spectroscopic evidence that a predominant fraction of the PAH molecules might be in an ionized charge state. We suggest that a particularly strong UV radiation field from the star or a disk wind may ionize the PAH molecules out to the largest radii. -In contrast to the disk model already proposed by Dominik et al. (2003), our alternative radiative transfer model of HD 179218 coupled to mid-infrared imaging at 12.5 µm suggests a surface density with a p=-1.5 negative power-law index, with most of the dust mass located in the first 30 au of the outer disk. Assuming that the gas and the PAH molecules are strongly coupled, the detection of an extended PAH emission would point to a flared structure of the disk in HD 179218 and confirm earlier results. 
Cumulative FWHM for both reference stars (HD 169414 and HD 187642) and science star (HD 179218) in the PAH-1 (top), PAH-2 (center) and Si-6 (bottom) filters as a function of the number of co-added frames following the procedure described in Sect. 3.2. The dashed line is the average FWHM reported in Table 2 and following a Lorentzian fit, whereas the dotted lines correspond to the 3σ boundaries. Appendix B: Results of χ 2 minimization Table B.1 reports the value of the non-reduced χ 2 on the SED obtained during the search of our best radiative transfer model of the disk's thermal emission in HD 179218. The observational SED is plotted in Fig. 4. We compare a posteriori the synthetic and the observed PSF profiles at 12.5 µm. Non-reduced χ 2 table for the fit of the SED for different values of p in Eq. 3, R i and R o , respectively the inner and outer radius of the outer disk in HD 179218. The light and dark gray boxes correspond to models for which the 12.5 µm PSF is spatially unresolved and resolved, respectively. The best model is identified for the value χ 2 =302 (bold and italic).
2017-11-14T17:14:01.000Z
2017-11-14T00:00:00.000
{ "year": 2017, "sha1": "c338d4192b56dc2c4f803e6b10b336757f5c8da2", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2018/04/aa32008-17.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "c338d4192b56dc2c4f803e6b10b336757f5c8da2", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
247055932
pes2o/s2orc
v3-fos-license
Prevalence of screen time use and its relationship with obesity, sleep quality, and parental knowledge of related guidelines: A study on children and adolescents attending Primary Healthcare Centers in the Makkah Region BACKGROUND: Since the use of handheld electronic devices is prevalent among people of all ages, health organizations have specified appropriate screen times for the different age groups. The aim of this study was to investigate the prevalence of screen use and its association with sleep quality and obesity. MATERIALS AND METHODS: This cross-sectional study was conducted on people attending three Primary Healthcare Centers in the Makkah region between January and October 2019. The three-part questionnaire filled by parents collected data on sociodemographics, parental knowledge of guidelines, and sleep quality. Data were analyzed using STATA 14.2. For continuous variables, groups were compared using the t-test; the Pearson Chi-squared test or Fisher's exact test, as appropriate, was employed for categorical variables. RESULTS: A total of 450 individuals completed the questionnaire. Children 2–12 years old spent more time and used phones, tablets, and television (TV) more frequently, while those younger than 2 or older than 12 used phones and TVs more than other devices. High body mass index was associated with the daily usage of electronic devices. Fewer hours of sleep, a longer time to fall asleep, and longer hours in bed were associated with the usage of all electronic devices. Furthermore, a good knowledge of the maximum time allowed for children and teenagers and of the content scoring system was associated with hours slept per night, and low knowledge was associated with higher frequency of using electronic devices. CONCLUSION: Children spent long periods using electronic devices, and despite knowing the guidelines, parents still allowed their children to exceed the time acceptable for the use of electronic devices, which could lead to future social problems. Introduction People of all ages indulge in watching television (TV), playing on different consoles, and using handheld electronic devices. An estimate in the United States of America found that 60% of children younger than 8 years owned a smartphone and 40% owned a tablet device. [1] Another report estimated that 83% of children 6 years and below used a screen media device in a typical day. [2] Of these, 73% watched TV, videos, or digital video disks, 18% used computers, and 9% played video games. These numbers are higher among adolescents aged 12-19 according to a report which indicated that 83% of adolescents used a smart device every day. [3] A Mexican study found that of the devices used by children in households, smartphones accounted for 62.4% and desktops or laptops accounted for 60.9%. [4] Another study of children between 5 and 16 found that average daily TV viewing exceeded 6 h. [5] The American Academy of Pediatrics recommends that parents should limit their children's total media time to no more than 1-2 h a day of genuine quality and under supervision. [6] It also recommends that children younger than 2 years should be discouraged from watching TV and that parents should generally watch TV with their children. Other health institutions like the Department of Health in the government of Australia also recommend that children should not have more than 2 h a day of electronic media and that preschoolers should be encouraged to be more active.
[7] Other guidelines recommend limiting the use of sedentary electronic equipment to <2 h and with a break every 30 or 60 min. [8] Not all movies and games are suited for children since some do include references or scenes with sexual contents or drugs or crimes. Many governments have rating systems to help parents determine what their children should watch or play. For example, the Australian government classifies movies and games into five categories. These are general (G) which are suitable for everyone, parental guidance which are not recommended for children below 15 years without guidance from parents, mature (M) which are not suitable for those below age of 15, mature accompanied (MA) which are illegal for those below 15 to watch or play unless purchased by an adult guardian who is exercising parental control over the child, and finally restricted content (R+18), which is restricted for adults only. [9] Other governments such as American, Canadian, and European have similar classifications. [10][11][12] In Saudi Arabia, the average time spent on mobile devices is 2 h and 42 min, which is slightly above the average in the study of 10 countries across Europe and Middle East. The average age for ownership of devices is 6 years for tablets; the average age for games consoles connected to the internet is 7 years, for laptops and computers, it is 8 years, for smartphones, it is 9 years. Parents, however, believe that children should be older when they get their devices. Around 86% of Saudi parents are concerned that their children are exposed to explicit content on the internet, 83% fear their children might meet strangers online, 80% worry their children are spending too much time in front of the screen, and 76% are concerned their children might suffer online bullying. [13] Saudi Arabia has recently established its rating system for movies with the re-opening of cinema theaters in 2018 and substituted the American and European rating systems with its own rating systems for games in 2016. [14] The aim of this study was to investigate the prevalence of electronic devices used and the time children spend on the screen, and its association with sleep quality and obesity, and to investigate parental knowledge of guidelines and content rating systems related to the use of these devices. Materials and Methods This cross-sectional study was conducted between January and October 2019 in three primary healthcare centers of the Ministry of National Guards Health Affairs (MNGHA) in the Western area of Saudi Arabia. Ethical approval from the Institutional Review Board was obtained vide letter No. IRBC/2106/18 dated 13/12/2018, and informed written consent was taken from the parents of all participants in the study. The medical services of the MNGHA are composed of primary healthcare services scattered over Saudi Arabia along with medical cities and hospitals that provide more advanced services for its beneficiaries. The main population targeted were 18-year-old and younger male and female adolescents and children who attended the primary healthcare centers with both or one of their parents. The yearly average of our population attending the three primary healthcare centers exceeds 50,000 persons. This number was used to calculate the sample size needed for the study. With 95% confidence interval, and a 5% margin of error, the minimum required sample size as calculated was 375. Considering a 10% nonresponse rate, the final sample size was set at 450. 
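The paper does not state the exact sample-size formula used; a standard Cochran calculation for a proportion, with a finite-population correction for the roughly 50,000 yearly attendees and an inflation for the expected 10% non-response, is sketched below. It yields a minimum close to, though not identical to, the reported 375, with the authors rounding the final target up to 450.

```python
import math

def required_sample_size(z=1.96, p=0.5, margin=0.05, population=50_000, nonresponse=0.10):
    # Cochran's formula for a proportion, with finite-population correction
    # and inflation for expected non-response.
    n0 = (z ** 2) * p * (1.0 - p) / (margin ** 2)    # ~384 for these defaults
    n_fpc = n0 / (1.0 + (n0 - 1.0) / population)     # ~381 for N = 50,000
    return math.ceil(n_fpc / (1.0 - nonresponse))

print(required_sample_size())   # ~424; the study set its final target at 450
```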
We followed a quota sampling technique where 150 families were selected from each center. Data were collected by distributing a self-administered questionnaire composed of three parts to be completed by parents. The first part consisted of sociodemographic data and information about devices used. It included age, weight, height, gender, level of education, devices used, frequency of use of each device, and time spent on each device. The second part assessed the parental knowledge of guidelines on recommended screen time for each age category and their knowledge of the content rating system. The third part was the previously validated Arabic version of the Pittsburgh Sleep Quality Index (PSQI), consisting of 19 items that assessed sleep quality in the last month. [15] It has 7 components covering subjective sleep quality, sleep latency, sleep duration, sleep efficiency, sleep disturbance, use of sleep medication, and daytime dysfunction. The score for each component ranges from 0 (no difficulty) to 3 (severe difficulty). The total score ranges from 0 to 21, with higher scores indicating worse sleep quality. Statistical analysis was conducted using Stata Statistical Software: Release 14 (2015; StataCorp, College Station, TX, USA). Continuous variables were presented as mean and standard deviation (SD) and inter-group differences were compared using the t-test. Skewed numerical data were presented as median and average rank and between-group differences were compared using the Mann-Whitney U-test. Paired numerical data were compared using the paired t-test. Categorical variables were presented as number and percentage, and differences between groups were compared using the Pearson Chi-squared test or Fisher's exact test. Ordinal data were compared using the Chi-squared test for trend. P < 0.05 was considered statistically significant. Results Demographics show that in our study, 174 of the participants were males and 276 were females, in a total of 450 participants. The number of toddlers aged <2 years old was 113 (25.1%), children between the ages of 2 and 6 years were 93 (20.7%), children between the ages of 6 and 12 years were 101 (22.4%), and adolescents aged >12 years old were 143 (31.8%). There was a statistically significant difference in patterns of using electronic devices, as infants <2 years never used laptops, computers, video games, nor tablets except rarely, while teenagers used tablets, video games, and tablets most frequently, with P values of 0.0001, 0.0001, and 0.0001, respectively. Regarding body mass index (BMI) for our respondents, the mean was 20.4 (SD 6.6) and the median 18.5, ranging from 10.5 to 54.3 for all respondents. As shown in Tables 3 and 4, BMI was significantly correlated with the frequency of using electronic devices, as the highest BMI was associated with daily usage of electronic devices (P = 0.0001). Furthermore, higher BMI was significantly associated with the use of computers and laptops (P = 0.01) and with using phones (P = 0.0001). With regard to parental knowledge, 348 (77.3%) of the parents had heard about guidelines regarding how much screen time children should have and 102 (22.7%) of the parents had not. Three hundred and fifty-four (78.7%) parents had heard about content rating systems regarding the appropriateness of games or videos for children, but 96 (21.3%) of the parents had not. Parental knowledge did not have any significant correlation with sleep quality and time spent in bed, but it had a significant correlation with total sleep hours.
The mean number of hours slept by children was 8.4 ± 0.1 (P = 0.01). Good knowledge of the maximum screen time allowed for children and teenagers and of the content scoring system was significantly correlated with hours slept per night, and surprisingly, those with high knowledge had fewer sleeping hours (7.7 ± 0.2 vs. 8.4 ± 0.1 for those with less knowledge; P = 0.01). Low knowledge of the guidelines on using electronic devices was significantly associated with higher frequency of using electronic devices such as computers, video games, and phones (P = 0.0001, 0.02, and 0.002, respectively). Respondents had a mean time of 16.6 ± 10 min before sleep. Furthermore, they had a mean time of 8.6 ± 2.7 h in bed. However, the mean hours slept were 8.2 ± 2.6. Regarding sleep quality and the PSQI, sleep interruptions of the respondents during the last month were infrequent; however, 14.2% could not initiate sleep within 30 min more than 3 times weekly, and 12.4% woke up in the middle of the night or early morning: 8.9% woke up because of a bad dream (the most common reason), 6.4% because they were cold, and 6% because they had to go to the bathroom. Most respondents had no problem keeping up with doing things enthusiastically (79.1%), and only 2.4% had major problems. In addition, 66.4% rated their sleep quality as very good, 23.1% as fairly good, 4.2% as fairly bad, and 6.2% as very bad. Regarding problems reported by a roommate, disorientation and confusion episodes while sleeping occurred in 6.4%, loud snoring in 5.3%, and restlessness during sleep in 2.2%. As shown in Table 5, the use of all types of electronic devices was significantly associated with fewer hours slept, a longer time before sleep, and more hours spent in bed (P < 0.05), except for phones with time in bed and hours slept, and video game consoles with hours slept. Using the PSQI, scores from 0 to 10 were categorized as low and scores above 10 as high. The minimum score was 0 and the maximum was 13, with a mean of 2.9 (±2.8). As shown in Table 6, the PSQI was significantly correlated with using computers, tablets, and TV (P = 0.0001, 0.0001, and 0.04, respectively). Discussion This study assessed the prevalence of electronic device use and the time spent on these devices, their association with weight and sleep quality, and parental knowledge of the time recommended by the guidelines and of the rating system for games and movies. Of the 450 participants included in the study, 31.8% were above the age of 12 and 61.3% were female. The study found that children from 2 to 12 years old spent more time and used phones, tablets, and TV more frequently, while those younger than 2 or older than 12 used phones and TV more than other devices. It also found that phones and TV were used for more than 2 h each day. Moreover, high BMI was associated with daily use of electronic devices. It also found that fewer hours of sleep, a longer time to fall asleep, and longer hours of time in bed were associated with the usage of all electronic devices. Furthermore, a good knowledge of the maximum time allowed for children and teenagers to use devices and of the content scoring system was significantly associated with the hours slept per night, and low knowledge was associated with higher frequency of using electronic devices. Our study found that high BMI was associated with daily usage of electronic devices. The frequency of using phones and of using computers and laptops had a significant association with BMI.
A study published in 2015 in Canada found that exceeding 2 h of screen time was associated with higher weight and waist circumference. [16] Moreover, a study in China found out that higher screen time was an independent risk factor for being overweight or obese. [17] In addition, a meta-analysis of 16 studies conducted in 2019 showed that spending more than 2 h on screen was associated with childhood overweight or obesity and that the association in the separated screen time, such as using a TV or computers, was more obvious than when total screen time is taken into account. [18] Our study also found that fewer hours of sleep, longer time it takes to fall asleep, and longer hours of time spent in bed were associated with usage of all electronic devices. Other studies had the same findings associated with electronic device usage on sleep patterns. One study published in 2018 in the United States found that digital screen time was associated negatively with sleep duration. [19] Another study conducted in 2019 found that children between 2-5 and 6-10 years old who spent 4 h or more per day on portable devices were twice as likely to get insufficient sleep as individuals who spent no time on portable devices; and those who were 11-13 years old were 57% more likely to have sleep insufficiency if they spent 4 h on portable devices. [20] Moreover, a study conducted in some European countries in 2018 found out that adolescents from 10 to 19 years old who exceeded 2 h of screen usage had 20% higher odds of reporting sleep-onset difficulties. [21] Another study done in Brazil found that phones were associated with delayed bedtime and shortened sleep duration. [22] Regarding knowledge assessment for parents, the majority of responders had adequate knowledge on how much screen time children could have and the content rating systems as 77.3% knew the maximum time allowed for children to use electronic devices, and 78.7% had knowledge of the content scoring system. However, a high percentage of the children in the study used phones and TV daily, and many used them for more than 2 h each day, and they had fewer hours of sleep. One study found that children whose parents set rules for TV time were less likely to exceed recommended screen time limits. [23] Other studies found that parental screen time practices had an influence on children's screen time use and that limiting screen time was effective in preventing overweight. It also found that any interventions to reduce screen time should involve both parents. [24,25] Conclusion Excessive screen time use has a negative impact on children and adolescents. Different studies found out that screen time was associated with other health issues not evaluated in our study, such as issues with vision, physical discomfort, depression, attention deficit/ hyperactivity disorder, and antisocial behaviors. [26][27][28][29][30] We recommend that these issues should be examined in future studies in our population. Moreover, this study was conducted before the COVID-19 pandemic and a comparative study might provide different results. Furthermore, a different approach to collecting data should be considered as we had difficulty in data collection for this research as parents thought that the completion of the questionnaire was too time consuming. Finally, conducting this study in a restricted military hospital and the length of the questionnaire were the main limitations to the use of a larger sample. Financial support and sponsorship Nil. 
Conflicts of interest There are no conflicts of interest.
2022-01-24T14:47:53.687Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "1f326233848c6615291024a57d3efa1d3c900397", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "6d92efe9e70a7843a7e47b97130a074ef18c20f8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252348252
pes2o/s2orc
v3-fos-license
Prevalence of Pseudohypoparathyroidism and Nonsurgical Hypoparathyroidism in Japan in 2017: A Nationwide Survey Background Pseudohypoparathyroidism (PHP) and nonsurgical hypoparathyroidism (NS-HypoPT) are rare diseases with hypocalcemia, hyperphosphatemia, and high and low parathyroid hormone levels, respectively. In Japan, over 20 years have passed since the last survey on these diseases. We carried out a nationwide cross-sectional survey in 2018 to estimate the prevalence of these diseases. Methods We conducted a nationwide mail-based survey targeting hospitals in 2018. From a total of 13,156 departments throughout Japan, including internal medicine, pediatrics, neurology, and psychiatry, 3,501 (27%) departments were selected using a stratified random sampling method. We asked each included department to report the number of patients with PHP and NS-HypoPT in 2017. Results The overall survey response rate was 52.0% (1,807 departments). The estimated number of patients with PHP and NS-HypoPT was 1,484 (95% confidence interval [CI], 1,143–1,825) and 2,304 (95% CI, 1,189–3,419), respectively; the prevalence per 100,000 population was 1.2 and 1.8, respectively. Conclusion In this study, we generated estimates of the national prevalence of PHP and NS-HypoPT in Japan during 2017, which were found to be higher than those previously reported. INTRODUCTION Pseudohypoparathyroidism (PHP) is defined as target organ resistance to parathyroid hormone (PTH), which results in hypocalcemia and hyperphosphatemia. PHP is clinically divided into PHP1A with Albright hereditary osteodystrophy (AHO), characterized by short stature, round face, obesity, brachydactyly, and heterotopic ossification, and PHP1B without AHO. 1 PHP is caused by molecular defects that impair hormonal signaling via receptors that are coupled, through the alpha subunit of the stimulatory G protein (Gsalpha), to activation of adenylyl cyclase. 2 Pseudopseudohypoparathyroidism (PPHP), progressive osseous heteroplasia (POH), and acrodysostosis are also disorders with impairments in the PTH and/or PTHrP cAMP-mediated pathway. 3,4 Hypoparathyroidism is characterized by hypocalcemia owing to insufficient PTH secretion. Apart from post-surgical hypoparathyroidism, idiopathic hypoparathyroidism is the most frequent form, even though a number of genetic causes of impaired PTH secretion have been identified. Additionally, there are many cases of unknown etiology. 5 PHP and nonsurgical hypoparathyroidism (NS-HypoPT) are rare and are categorized as intractable diseases by the Ministry of Health, Labour and Welfare in Japan. An epidemiological survey on PHP and NS-HypoPT was conducted in Japan during 1997. 6 The diagnostic criteria at that time are shown in eTable 1. Since then, molecular diagnostic methods have progressed and awareness of the disease has increased. 7,8 For these reasons, we believe that reassessment of the epidemiological information is needed. The Research Committee on Epidemiology of Intractable Diseases (Chairperson: Yosikazu Nakamura) and the Hormone Receptor Abnormality Research Committee (Chairperson: Takashi Akamizu), sponsored by the Ministry of Health, Labour and Welfare of the Japanese government, jointly conducted a nationwide survey of PHP and NS-HypoPT in Japan during 2018.
The purpose of the present study was to determine the current number of patients with PHP and PHP-related diseases, as well as NS-HypoPT, in Japan, and to determine their clinical and epidemiological characteristics. In this report, we discuss only the prevalence from the first survey. METHODS This nationwide survey was carried out using a protocol for epidemiological research on intractable diseases. This protocol was created by the study group of Epidemiological Research of Intractable Diseases Japan and was developed based on the concept that patients with intractable diseases tend to go to larger hospitals. Therefore, in this method, the target facilities were extracted by stratification according to the scale of the hospital and the clinical department, and the extraction rate was changed for each stratum to extract the survey facilities. 6,9 Target diseases examined in the survey were PHP, PPHP, POH, acrodysostosis, and NS-HypoPT. The target clinical departments in the previous survey were pediatrics, neurology, internal medicine, and endocrinology. However, given the fact that endocrinology was not incorporated into the list of clinical departments in this study, the four clinical departments in the present survey included pediatrics, internal medicine, neurology, and neuropsychiatry. The survey period was the full calendar year 2017. The selection rate was 100% for hospitals with 500 beds or more and university hospitals, 80% for hospitals with 400 to 499 beds, 40% for hospitals with 300 to 399 beds, 20% for hospitals with 200 to 299 beds, and 10% for hospitals with 100 to 199 beds; only 5% of hospitals with fewer than 100 beds were selected at random. In addition to the four abovementioned departments, we designated pediatric specialty facilities as a special department. We then sent a questionnaire with diagnostic guidelines to all selected study departments (Table 1). As a first survey, in February 2018, we inquired about the presence of patients with the diseases of interest and the number of cases in hospitals that were extracted using the above method. In August 2018, we submitted a second request to complete the first survey to hospitals that did not respond to our initial request. During a secondary survey in October 2018, we distributed an individual survey form to facilities that reported having patients with the target diseases in the first survey. The study protocol was approved by the Ethics Committee of Chiba University School of Medicine (approval number: 2940).
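The stratified selection rates above feed directly into the patient-count estimation described in the next subsection. As an illustration of that calculation, the sketch below applies the stated formula per stratum and converts the summed total into a prevalence per 100,000; all per-stratum inputs are purely hypothetical and are not the survey's actual counts.

```python
def estimate_stratum_total(reported, sampling_proportion, response_proportion):
    # Formula stated in the following subsection:
    #   estimated total = reported / (sampling proportion x response proportion)
    return reported / (sampling_proportion * response_proportion)

# Purely hypothetical per-stratum inputs (reported cases, sampling, response):
strata = [
    (250, 1.00, 0.55),   # e.g. university hospitals
    (110, 1.00, 0.50),   # e.g. hospitals with >= 500 beds
    (40,  0.80, 0.50),   # e.g. hospitals with 400-499 beds
]
total = sum(estimate_stratum_total(*s) for s in strata)
prevalence_per_100k = 100_000 * total / 124_480_000   # 2017 population of Japan
print(round(total), round(prevalence_per_100k, 2))
```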
Estimation of the number of patients In consideration of the selection and response rate in the first survey, we estimated the total number of patients with PHP and NS-HypoPT. The total number of patients in Japan during the study period was calculated using the following formula: Estimated total number of patients = number of reported patients / (sampling proportion × response proportion) = number of reported patients / (number of departments that responded / number of departments in Japan). The numbers of patients for each stratum were summed. 12-14 RESULTS Table 2 shows the number of sampled and responding departments, per medical department type, and the number of patients by department. From a total of 13,156 departments comprising internal medicine, pediatrics, neurology, and psychiatry throughout Japan, 3,501 (27%) study departments were selected at random. Of the departments that received the first questionnaire, 1,807 responded; the response rate was 52%. The responding departments reported 478 patients with PHP and 704 patients with NS-HypoPT who visited hospitals in 2017 (Table 3). The details of patients with PHP were: 252 (53%) patients from university hospitals, 112 (23%) from hospitals with 500 beds or more, 43 (9%) from hospitals with 400 to 499 beds, and 35 (8%) from pediatric specialty facilities. There were 36 (7%) patients from other facilities. The details of patients with NS-HypoPT were: 330 (47%) patients from university hospitals, 190 (27%) from hospitals with 500 beds or more, 71 (10%) from hospitals with 400 to 499 beds, and 51 (7%) from pediatric specialty facilities. There were 62 (9%) patients from other facilities. In this way, many of these patients were reported from university hospitals or larger hospitals. For other diseases, 19 patients were reported with PPHP, 5 with POH, and 7 with acrodysostosis. Table 4 and Table 5 show the numbers of patients with PHP and NS-HypoPT, estimated statistically on the basis of the values obtained from the extracted samples. The number of patients with PHP was estimated to be 1,480, whereas the number with NS-HypoPT was estimated as 2,300. From these estimates and the estimated 2017 Japanese population of 124.48 million, the prevalence of PHP and NS-HypoPT was 1.2 and 1.8 per 100,000 inhabitants, respectively. The distribution by disease, age, and sex of patients from the second survey is shown in Table 6. Of the 363 cases reported with NS-HypoPT, 195 (54%) were male, whereas 105 (57%) of the 241 cases reported with PHP were female. NS-HypoPT occurred at a similar rate across all age groups while PHP was concentrated in patients under the age of 60 years. Table 7 presents details of the clinical diagnosis of NS-HypoPT. Idiopathic hypoparathyroidism and DiGeorge syndrome accounted for the majority of patients. DISCUSSION In this study, we estimated that the prevalence rate per 100,000 individuals for PHP and NS-HypoPT was 1.2 and 1.8, respectively. The present study was preceded by a study in 1998 on PHP and NS-HypoPT. Nakamura et al reported the results of a previous survey, with a prevalence per 100,000 for PHP and NS-HypoPT of 0.34 and 0.72, respectively. 6
Compared with results of the previous survey, the prevalence per 100,000 inhabitants was higher for both PHP and NS-HypoPT in our study. The difference in department selection versus the previous study may have affected results. Although psychiatry, which was not included in the previous survey, was included in the present survey, almost no cases were reported; therefore, we considered that this inclusion had no effect on the prevalence (eTable 3 and eTable 4). In this study, four clinical departments were targeted and the possibility of duplication cannot be denied. However, identical cases were not observed in the personal survey table of the secondary survey; thus, we assumed no significant impact. Consistent with the findings of the previous survey, NS-HypoPT patients were predominantly male, while PHP patients were largely female (Table 6). Age distribution was similar to that in the previous survey, with PHP more prevalent than NS-HypoPT in the younger generation and NS-HypoPT relatively common among men aged 50 years and above. In actuality, the reason for the increase in the number of patients with both diseases is not clear. This may be related to problems of recognition, as well as a true increase. The diagnostic rate may have increased because awareness about PHP and NS-HypoPT has increased and molecular genetic diagnostic methods have advanced. 15,16 PHP may be suspected in patients who present with AHO and hypothyroidism, and PTH resistance has been found before the onset of hypocalcemia. 15 There have also been reports of cases in which DiGeorge syndrome was genetically diagnosed, as distinguished from other symptoms, and hypoparathyroidism was diagnosed as a result of close examination. 16 There is little epidemiological information from other countries regarding either PHP or NS-HypoPT. In 2016, Underbjerg et al reported that the prevalence of PHP was 1.1 per 100,000 in Denmark, and Astor et al reported that the prevalence of PHP was 0.82 per 100,000 in Norway. 17,18 Regarding NS-HypoPT, Underbjerg et al reported that the prevalence of NS-HypoPT was 2.3 per 100,000 in Denmark in 2015. 5 In 2016, Astor et al reported that the prevalence of NS-HypoPT was 0.78 per 100,000 in Norway. 17 The prevalence in our study was similar to that reported in Denmark in 2016 and 2015 for both PHP and NS-HypoPT (Table 8). There are several limitations to this study. First, because patients in clinics with fewer than 20 beds were not included, the prevalence may be underestimated. Despite this, it is likely that patients with both diseases tend to be treated in large, highly specialized hospitals and were generally covered by our survey. Second, the estimated prevalence was calculated on the assumption that the prevalence of PHP and NS-HypoPT is the same in those hospitals that did not respond to our survey. Hospitals with no cases may not even reply. Therefore, hospitals that responded may be biased toward those that have patients with the diseases of interest. In internal medicine, cases from smaller hospitals with a low sampling rate affect the total estimated number of patients and widen the 95% confidence interval, contributing to an overestimate (eTable 3 and eTable 4). From the above, we may have overestimated the number of patients. Finally, the diagnoses were completely dependent on the attending physicians and were not confirmed using biochemical data.
In conclusion, we determined the prevalence of PHP and NS-HypoPT in Japan in 2017.The overall prevalence rate per 100,000 individuals for PHP and NS-HypoPT was 1.2 and 1.8 in that year, respectively.These estimates are higher than those in 1997, suggesting increasing disease burden. Table 3 . Number of patients reported by departments Table 4 . Estimated numbers of patients with pseudohypoparathyroidism (PHP) in Japan in 2017 Totals were calculated with significant figures up to the tenth decimal place and rounded to the nearest whole number. a Table 5 . Estimated numbers of patients with nonsurgical hypoparathyroidism (NS-hypoPT) in Japan in 2017 a Totals were calculated with significant figures up to the tenth decimal place and rounded to the nearest whole number. Table 6 . Age distribution by disease and sex Table 7 . Number of cases with each disease type among cases with nonsurgical hypoparathyroidism (NS-hypoPT) in Japan
2022-09-18T15:15:48.195Z
2022-09-17T00:00:00.000
{ "year": 2022, "sha1": "995c15edf22126b78add5c895616414bfa5f681a", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/jea/advpub/0/advpub_JE20220152/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8bf6157b1517b83bf6db7b400f2597050c2bd28d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
149721265
pes2o/s2orc
v3-fos-license
The impact of mathematical competences and cognitive effort on the appearance of the framing effect Abstract The aim of this paper is to check whether mathematical competences influence some manifestations of bounded rationality. A special example of bounded rationality, the "framing effect," is used to analyse empirically the thesis that mathematical competences and cognitive effort may reduce the framing effect. Two kinds of cognitive effort, probabilistic and deductive, are analysed. Experiments were conducted using samples of Polish students, both mathematically and business oriented. As the framing situation, the "Asian disease" problem (the first analysed and the most popular example of the framing effect) is considered. The thesis that a mathematical background may diminish the occurrence of the framing effect was partly confirmed. Introduction The aim of this paper is to check whether mathematical competences affect some of the manifestations of bounded rationality. A special example of bounded rationality, the "framing effect," is considered, and the thesis that mathematical competences and cognitive effort may reduce the framing effect is analysed empirically. Two experiments were conducted using students of Warsaw universities with different mathematical competences. Differences in mathematical level were not measured on any scale but were inferred from the students' majors: first, because matriculation in mathematics was not obligatory at that time, and second, because the students studied mathematics at very different levels during their studies, which significantly shaped their mathematical competences depending on the type of studies. Other characteristics were randomized because groups were based on university lists, such as alphabetical order or student number. In the case of students with high mathematical competences, the selection was based on an interest in mathematics and quantitative methods. The experiments were conducted in similar circumstances (time of year, time of day, part of the semester). The students were similar in lifestyle and approach to their studies; mathematical competences were the only significant difference. Other characteristics may play a role for some individuals, but these constitute a small percentage, as in any randomized group. In the first experiment, students of the Warsaw School of Economics with high and low mathematical backgrounds were compared. Both groups answered questions on the Asian disease problem, and part of them were stimulated by a probabilistic cognitive effort. The results showed that mathematical competences only partly diminished the framing effect (in the case of loss). We concluded that our respondents did not differ sufficiently in their level of mathematical competence, so we conducted a second experiment. In this experiment, students of mathematics and physics were compared with students of management. The results confirmed those of the first experiment (that mathematical competences diminish the framing effect in the case of loss), and the impact of the mathematical background was greater than in the first experiment. The paper is organized as follows. In Section 1 bounded rationality as the foundation of the framing effect is presented. The Asian disease experiment is explained in Section 2. Hypotheses are presented in Section 3.
The first experiment (Sosnowska, 2013) is analyzed in Section 4, the second experiment (Kaczmarek, 2015) in Section 5. The paper ends with conclusions. Bounded rationality Making a decision under risk, especially in the domain of social affairs, is quite often a trade-off between being economically rational and socially acceptable. A decision-maker may employ rational models of analysis, i.e., expected value, but in the real world such a 'ruthless' approach to the decisions may not find wide support amongst the public. Arguably most real decision makers are perfectly aware of this. Beach and Lipshitz (1996) describe it as an "open situation" with some unpredictable social risks involved as opposed to the "closed" ones with all the consequences well-specified and not going beyond the matter to be decided upon. For instance a decision maker who is to decide whether to treat patient A or B with a scarce medication should focus on the chances for recovery, however he/she would probably take into account some moral considerations (about who is to be saved, etc.). The concept that explains this theoretical standpoint is Simon's bounded rationality. In an objectively defined world agents could be perfectly, i.e., economically, rational, yet in the real world the understanding of the dilemma to be resolved is by all means a subjective one, based upon the decision-maker's own personal goals, not necessarily consistent with the expected value maximizing principle. The above concerns both micro and macroeconomics (Kowalski, 2002). Being aware of the consequences that go beyond the decision scenario, or focusing on non-economic aspects of the dilemma may lead to tagging the decision as moral, ethical, religious, or personal vs. impersonal. As will be described later the tag attached to the decision scenario may lead to major changes in the nature of the decision-making process: the motives and goals are different, the amount of effort varies and finally, different decision rules are employed. The framing effect may be treated as a part of dual brain processing. Let us study a short framework for framing studies. The problem of different risk preferences in the domains of gains and losses was addressed by Markowitz (1952). For large outcomes people are risk averse in the domain of gains and risk seeking in the domain of losses, which was later named as the "framing effect". However Markowitz proposed that for the small outcomes the reverse framing effect is expected -risk seeking in small gains and risk aversion in small losses. Obviously for the "Asian disease scenario" when massive (non-financial) outcomes are presented, the Markowitz theory and the Prospect theory suggest the same pattern of risk preferences. The contemporary approach to this problem, called the "framing effect" has its beginning in Tversky and Kahneman's (1981) research. The framing effect may be described as the breaking of the invariance principle caused by putting a decision-maker in the domain of gain or loss. Logically the risk preference should be stable, however in the domain of loss people tend to seek risk, while in the domain of gain they chose the certain option. The framing effect is, superficially, a well-documented bias in risky decisions (see Kühberger, 1998;Kühberger, Schulte-Mecklenbeck, & Perner, 1999). Regardless of this little is known about the nature of the processes that lead to its occurrence. It is especially not clear whether the effect is produced by the lack of or excess of thought. 
Some of the cognitive biases and errors are clearly categorized as produced by shallow thoughts while others result from deep, yet erroneous thinking. The results of the framing studies, however, do not allow such a strong statement to be made. The amount of cognitive effort involved in the processing is said to reduce the effect (Guo, Trueblood, & Diederich, 2017), although some empirical results demonstrate the opposite -thoughtful processing generates the effect (Gonzales, Dana, Kosino, & Justa, 2005;Igou & Bless, 2007;Svenson & Benson, 1993). Asian disease as a classical scenario for framing studies Since 1981 the framing effects (Tversky & Kahneman, 1981) have been studied extensively, most often in connection with the "Asian disease" problem: the scenario in which a deadly disease endangers the lives of 600 inhabitants of a certain town. The task is to choose between two alternative rescue programmes, either certain or risky, which are described (framed) either positively or negatively, but equal in their expected value. Positively framed subjects choose between: (A) saving 200 people for sure and (B) saving 600 people with a one-third probability and a two-thirds probability that no people will be saved. Negatively framed subjects choose between: (A') certain death of 400 people and (B') a one third probability that nobody will die and a two-thirds probability that all 600 people will die. The framing effect shows itself in violation of the invariance principle, i.e., choosing a risky gamble (B') over a certain thing (A'), when the descriptors are negative (i.e., in loss domain, 78% chose B') and sure option (A) over a gamble (B) when the descriptors are positive (i.e., in gain domain, 72% chose A). The description of this bias is found in the Prospect Theory (Kahneman & Tversky, 1979) which states that people have an s-shaped value-function, concave for gains (which makes people risk-averse) and convex for losses (which makes people risk-seeking). Each prospect is evaluated (i.e., moved up or down around the reference point) as gain or loss and choices are made correspondingly. The effect seems not to be equal in size and strength under different conditions. There can be found some systematic individual differences in susceptibility to the framed choices also. Most of the studies were aimed at finding the most efficient methods for de-biasing the choices by increasing the amount of cognitive effort invested in the decision making process. This approach is based upon the assumption that the effect stems from reflexive or automatic processes and it can be overcome by thorough, motivated thought. The first stream of research focuses on various techniques of increasing the amount of thought -by making the participants accountable for the results of the decision, typically by either informing them that they would be asked to write a justification or actually writing it (Takemura, 1994). Takemura's study supported this method, however the results obtained by Sieck and Yates (1997) show that only getting the subjects to write the account removes the effect, which may mean that it is not only the motivation but also a 're-framing' of the scenario that reduces the effect. Contrary to those findings, in the study by Igou and Bless (2007), with the manipulation of the importance of the choice ('serious' vs. 'pilot' study tags) the effect disappeared in participants held accountable in a study tagged as a 'pilot' study, but was present in the one representing a 'serious' condition. 
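Under the expected-value criterion referred to above, the two rescue programmes in each frame are equivalent, which is exactly what makes the observed preference reversal a violation of invariance. The following minimal sketch (plain Python, no external data) makes that arithmetic explicit; the lottery encoding is only an illustration of the published scenario.

# Expected number of people saved (gain frame) or lost (loss frame)
# for the two Asian disease programmes; each option is a list of
# (probability, outcome) pairs.

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

# Gain frame: people saved out of 600
program_A = [(1.0, 200)]                      # 200 saved for sure
program_B = [(1 / 3, 600), (2 / 3, 0)]        # one-third chance all are saved

# Loss frame: people who die out of 600
program_A_prime = [(1.0, 400)]                # 400 die for sure
program_B_prime = [(1 / 3, 0), (2 / 3, 600)]  # two-thirds chance all die

print(expected_value(program_A), expected_value(program_B))              # 200.0 200.0
print(expected_value(program_A_prime), expected_value(program_B_prime))  # 400.0 400.0

Both pairs have identical expected values (200 saved, or equivalently 400 lost), so a decision-maker following the expected-value principle should be indifferent; the framing effect is the systematic departure from that indifference depending on the description.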
The possible reason for the inconsistency of the results may be the spontaneous emergence of different goals the decisionmakers may pursue in the process: logical correctness or creating a compelling narrative explaining the choice they made. This idea was tested in a set of studies where the goal was presented to the participants by tagging the problem as 'medical' (i.e., pertaining to the moral/ethical domain) or 'statistical' (clearly aimed at logical correctness). In the study by Igou and Bless (2007) where the participants were to solve either a statistical problem or a medical problem the framing effect was obtained only in the group of participants solving the medical problem. The authors claim that the framing effect is produced by constructive information processing in the course of which affectively vivid but non-diagnostic cues (e.g., words as 'die' and 'save') start to have an impact on the choices, possibly because they lead to moral considerations about the consequences of the decisions. In the 'statistical' condition the diagnostic data are contained in the numerical values which naturally draws decision-makers' attention to the calculations and expected value of the options. Another stream of studies is based upon seeking individuals' mental traits and capabilities that prevent them from making biased decisions. The qualities of the mind suspected as playing a role in the decision making are either the natural need to make reflective and difficult choices or mathematical literacy. Indeed in the study by Simon, Fagley and Halleran (2004) people with a high level of the need for cognition and high self-evaluation of their mathematical skills displayed no framing effect in the Asian disease scenario, while in the participants with a low need for cognition, the framing effect was obtained regardless of their mathematical skills. Frederick (2005) assumed that both logical and mathematical skills, as well as the motivation to suppress intuitive answers would influence decision making when the decision problems call for the rules of normative reasoning. The motivation to suppress first intuitions was measured by the Cognitive Reflection Test. Frederick (2005) observed that only the participants with low CRT scores more frequently chose the sure option in the domain of gain and the lottery in the domain of loss. People with high results in CRT were, however, more prone to take risk in the gain frame, avoid it in the loss frame and follow the expected value principle in their choices. Risk choices such as in the case of the occurrence of a framing effect may be connected with some additional reasons. An influence of higher education was analyzed by Fan (2017) on a Chinese example. Experiments described by Sparks and Ledgerwood (2017) show the dependence of risk decisions on some additional activities (in this case -the sequential framing effect). The research dealt with differences in education (mathematical or business) and the impact of additional activities (probabilistic or deductive incentives). The results do not cover the above mentioned but are based on the same method of seeking conditions which cause a framing effect. Hypotheses In the experiments presented above, a more formal way of introduction (e.g., the use of the word "statistical" instead of "medical") may reduce the framing effect. 
There may be a correlation between the mathematical competences of respondents and the occurrence of the framing effect, because there is a conviction that people with high mathematical competences reason logically and are therefore more resistant to the framing effect. Two experiments were conducted in which respondents were divided into two groups: more and less mathematically oriented. Both groups completed a questionnaire with questions on the choice of programme in the Asian disease problem, half of them in the domain of gain and half in the domain of loss. In these experiments an attempt is made to confirm the following hypotheses. H1: Mathematical competences cause a lack of the framing effect. H2: Probabilistic cognitive effort causes a lack of the framing effect. H3: Deductive cognitive effort causes a lack of the framing effect. H4: High mathematical competences cause the probabilistic equivalence of programmes A and B (A' and B') to be observed more frequently than in the case where such competences are low. The experiments are described in the next sections. First experiment -probabilistic incentives The first experiment was conducted by Sosnowska in 2013 with two groups of students of the Warsaw School of Economics (SGH). The first group consisted of first-year BA students. They had only basic mathematical competences, although most of them had passed extended mathematics as part of the matriculation exam (the group will be denoted as Nmat). They had just started their studies and had learnt only a little mathematics; they had not yet attended lectures on quantitative methods in economics. These students were further randomly divided into two groups, one in which a probabilistic incentive to cognitive effort was applied and a second without such an incentive. A simple exercise about the probability of gathering special mushrooms, which invoked intuitions connected with expected value, was used as the manipulation. These students did not know probability calculus. The groups will be denoted NmatPro (with the manipulation) and NmatNpro (without it). In both groups, subgroups operating in the gain domain (NmatProG, NmatNproG) and the loss domain (NmatProL, NmatNproL) were created. The second group of respondents consisted of second- and third-year students specializing in quantitative methods at the Warsaw School of Economics. All of them had taken many lectures on advanced mathematics, including probability theory. Their mathematical competences, based on their university mathematical education, were significantly higher than those of the first group because they had much more experience in mathematics. This group of students will be denoted as Mat. They were divided into two groups, with (MatPro) and without (MatNpro) a probabilistic incentive to cognitive effort. The experiment in the group with the incentive was conducted as part of an examination on probability calculus, in which students had to solve a task on expected value; the examination and this exercise played the role of the incentive. In both groups, subgroups were identified, one operating in the domain of gain (MatProG, MatNproG) and the other in the domain of loss (MatProL, MatNproL). The numbers of respondents in each subgroup are presented in Table 1, which shows that there were at least about 20 students in each group. In Table 2 the occurrence of the framing effect in groups NmatPro, NmatNpro, MatPro and MatNpro is presented.
In the following tables the sum of percentages may not be equal to 100% because some respondents noted an equivalence of the programmes or gave irrelevant answers. In the statistical analysis the critical value is 3.84 with α = 0.05. Table 2 shows that there is no unambiguous answer to hypotheses H1 and H2. In Table 3 respondents are divided into groups with (MatG = MatProG + MatNproG, MatL = MatProL + MatNproL) and without (NmatG = NmatProG + NmatNproG, NmatL = NmatProL + NmatNproL) mathematical competences. The occurrence of the framing effect is studied (the last column). It is shown in Table 3 that hypothesis H1 is confirmed; this is shown more precisely in Figure 1, which presents the occurrence of the framing effect described in Table 3.
Figure 1. Comparison of respondents with and without mathematical competences. Source: own calculations.
It is shown in Table 4 that hypothesis H2 is not confirmed. This fact is presented more precisely in Figure 2, which shows the occurrence of the framing effect. In Table 5 the percentage of respondents who noted the equivalence of the programmes is presented. It is shown in Table 5 that hypothesis H4 is confirmed; probabilistic incentives may make this effect stronger. Second experiment -deductive incentives The second experiment was conducted by Kaczmarek in 2015 using two groups of students of Polish universities. This time the students differed considerably in their mathematical competences. The first group consisted of students of management at Kozminski University (denoted as Nmat). They knew only elementary mathematics and had only participated in an introductory lecture on the application of mathematics in economics; their mathematical competences were low, comparable to the British O-level examination. Two of the authors have experience in teaching at business universities and have knowledge of their students' mathematical competence in comparison with students of mathematics or physics. These students did not know probability theory. They were divided into two groups, one in which some deductive incentives were introduced and a second without such incentives. A logical riddle was used as the manipulation. The groups will be denoted NmatDed (with the manipulation) and NmatNded (without it). In both groups, subgroups operating in the domain of gain (NmatDedG, NmatNdedG) and the domain of loss (NmatDedL, NmatNdedL) were identified. The second group of students consisted of students of mathematics and physics at Warsaw University and Warsaw University of Technology. Their mathematical competences were high; they had completed many courses on advanced mathematics. They were divided into two groups, one in which some deductive incentives were introduced and a second without such incentives. A geometrical exercise was used as the incentive. The groups will be denoted MatDed (with the incentive) and MatNded (without it). In both groups, subgroups operating in the domain of gain (MatDedG, MatNdedG) and in the domain of loss (MatDedL, MatNdedL) were identified. The number of respondents in each subgroup is presented in Table 6, which shows that each subgroup consists of approximately 20 students. In Table 7 the occurrence of the framing effect is presented. In the statistical analysis the critical value is 3.84 with α = 0.05. It is shown in Table 7 that hypotheses H1 and H3 are partly confirmed. In Table 8 respondents are divided into groups with (MatG = MatDedG + MatNdedG, MatL = MatDedL + MatNdedL) and without (NmatG = NmatDedG + NmatNdedG, NmatL = NmatDedL + NmatNdedL) mathematical competences.
The occurrence of the framing effect is studied (the last column), as well as the conformity of the numbers of respondents choosing the programmes with the expected numbers (the fourth column). The expected numbers are 50% and 50% because the programmes are equivalent. It is shown in Table 8 that hypothesis H1 is confirmed; this is presented more precisely in Figure 3, which shows the occurrence of the framing effect. In Table 10 the percentage of respondents who noted the equivalence of the programmes is presented. It follows from Table 10 that hypothesis H4 is confirmed. An impact of the deductive incentives is not observed. Conclusions The research shows that hypothesis H1 on the lack of a framing effect in the presence of mathematical competences was partly confirmed. There are situations where mathematical competences may cause decisions to be more rational. Hypothesis H2 on the impact of a probabilistic incentive on the rationality of decision-making under risk was not confirmed. Contrary to that observation, hypothesis H3 about the impact of a deductive incentive was confirmed. This may be interpreted as meaning that in a situation with deductive incentives Kahneman's System 2 starts to work (Kahneman, 2011). Indeed, in a think-aloud experiment Maule (1989) demonstrated that people can spontaneously reframe the "Asian disease" problem (elaborating it as both gain and loss), which removed the bias. Arkes (1991) obtained similar results using a direct reframing instruction. Framing effects also appeared less frequently in a within-subjects design (LeBoeuf & Shafir, 2003). All of the above induce deductive reasoning. Probabilistic reasoning is difficult to use without hints and probably has a different cognitive structure. Mathematical competences allow for the observation that the programmes are equivalent in their expected value, yet they still differ in complexity and social context. Hypothesis H4 was confirmed; probabilistic or deductive incentives make this observation more frequent. To summarize, there is some impact of mathematical competences and deductive reasoning on the occurrence of the framing effect, but they do not eliminate it. The question of whether mathematical competences make reasoning more rational requires analysis based on more examples. The study is limited to analyses of students and did not use financial incentives. The limitations of the study indicate possible directions for future research. The first is to take substantial financial incentives into consideration; such a study would have to be conducted via a paid survey portal with an appropriate sum of money as remuneration. The second is to compare students of mathematics, or mathematicians, with a control group consisting of people unrelated to institutions of higher education. A further development of the research should follow the results that confirm H4. It may be argued that not only the motivation (and hence the cognitive effort) and competences (and hence the capability to solve the problem) are responsible for the occurrence or reduction of the framing effect. The idea of the aims or goals behind the thinking processes seems to be a promising explanation: strenuous and competent thought oriented at a normatively correct solution would remove the effect, while thought aimed at 'socially desirable' solutions may be responsible for the occurrence of the effect.
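The statistical criterion used in both experiments (a critical value of 3.84 at α = 0.05, i.e., a chi-square test with one degree of freedom) can be illustrated with a short sketch. The cell counts below are hypothetical placeholders, not the data from Tables 2 or 7; the sketch only shows how a 2 × 2 comparison of programme choices across frames would be evaluated against that criterion.

# Hypothetical 2x2 test of independence between frame (gain/loss)
# and choice (certain/risky); Pearson chi-square with 1 degree of
# freedom, critical value 3.84 at alpha = 0.05.

observed = {
    ("gain", "certain"): 15, ("gain", "risky"): 6,   # placeholder counts
    ("loss", "certain"): 7,  ("loss", "risky"): 14,  # placeholder counts
}

rows = ("gain", "loss")
cols = ("certain", "risky")
total = sum(observed.values())
row_sum = {r: sum(observed[(r, c)] for c in cols) for r in rows}
col_sum = {c: sum(observed[(r, c)] for r in rows) for c in cols}

chi_square = sum(
    (observed[(r, c)] - row_sum[r] * col_sum[c] / total) ** 2
    / (row_sum[r] * col_sum[c] / total)
    for r in rows for c in cols
)

print(f"chi-square = {chi_square:.2f}")
print("choices differ significantly between frames at alpha = 0.05"
      if chi_square > 3.84 else "no significant difference between frames")

A statistic above 3.84 indicates that the distribution of certain versus risky choices depends on the frame, which is how the occurrence of the framing effect is judged in the tables above.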
2019-05-12T14:23:12.218Z
2018-06-01T00:00:00.000
{ "year": 2018, "sha1": "90e9ede72e75fc3bf0941f7bfe7fc587fc084341", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18559/ebr.2018.2.4", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "6219ec5dc6bb3c9a02b622b6b914015724db08cf", "s2fieldsofstudy": [ "Mathematics", "Economics", "Psychology" ], "extfieldsofstudy": [] }
36398320
pes2o/s2orc
v3-fos-license
Expression of the serum response factor gene is regulated by serum response factor binding sites. The serum response factor (SRF) is a ubiquitous transcription factor that plays a central role in the transcriptional response of mammalian cells to a variety of extracellular signals. Notably, SRF has been found to be a key regulator of members of a class of cellular response genes termed immediate-early genes (IEGs), many of which are believed to be involved in regulating cell growth and differentiation. The mechanism by which SRF activates transcription of IEGs in response to mitogenic agents has been extensively studied. Significantly less is known about how expression of the SRF gene itself is mediated. We and others have previously shown that the SRF gene is itself transiently induced by a variety of mitogenic agents and belongs to a class of "delayed" early response genes. We have cloned the SRF promoter and in the present study have analyzed the upstream regulatory sequences involved in mediating serum responsiveness of the SRF gene. Our analysis indicates that inducible SRF expression requires both SRF binding sites located within the first 63 nucleotides upstream from the start site of transcriptional initiation and an Sp1 site located 83 nucleotides upstream from the start site. Maximal transcriptional activity of the promoter also requires two CCAAT box sites located 90 and 123 nucleotides upstream of the start site. SRF is a ubiquitous transcription factor that is a key regulator of many extracellular signal-regulated genes important for cell growth and differentiation. SRF was first identified as a critical factor involved in mediating serum and growth factor-induced transcriptional activation of the c-fos proto-oncogene (reviewed in Ref. 1). The importance of SRF for growth factor-regulated transcription is suggested by the identification of SRF binding sites (serum response elements) within the regulatory region of many other transiently expressed serum-inducible genes. These genes, which can be induced in the absence of new protein synthesis, have been termed cellular immediate-early genes (IEGs) (2).
They include krox-20/egr-2 (3), egr-1/ zif-268/NGFI-A (4,5,6), cyr61 (7), pip92 (8), and members of the actin gene family (2). SRF has also been implicated in mediating IEG transcription in response to a variety of other agents, including agents that elevate intracellular calcium levels (9); viral activator proteins, such as the human T-cell lymphotropic virus type-1 activator protein Tax-1 (10,11) and the hepatitis B virus activator protein pX (12); activated oncogenes including v-src (13,14), v-fps (15), v-ras (16,17,18), and the activated proto-oncogene c-raf (19,20) as well as extracellular stimuli such as antioxidants (21), UV light (22), and microgravity (23). In addition to its role in mediating activation of genes expressed at early times after stimulation, some studies also suggest that SRF is involved in regulating later events, such as differentiation and cell cycle progression, presumably by regulating expression of key late response genes. Microinjection of anti-SRF antibodies blocks progression of stimulated fibroblasts from G 1 to S phase (24), suggesting that SRF or an SRF-related factor is important for controlling cell cycle progression. This function of SRF may be conserved through evolution, since genetic analysis has revealed that the yeast SRF homolog, MCM1, is involved in cell cycle progression (25). The observation that in yeast, MCM1 binding sites are found in the promoters of the cyclin genes cln3 and clnb2, and the gene for a cyclin interacting factor FAR1 (26), raises the possibility that SRF or a SRF-related factor may perform a similar function in mammalian cells. Other microinjection studies suggest that SRF is also important for differentiation in two myoblast lines, mouse C2 and rat L6. These studies show that SRF antibodies lead to downregulation of myogenin expression and block differentiation of myoblasts cells to myotubes (27), suggesting that SRF or SRFrelated factors directly or indirectly regulate muscle-specific transcription factors important for conferring the myogenic phenotype. Additional support for SRF playing a role in development of the myogenic phenotype comes from numerous studies that have identified SRF binding sites in the promoters of a number of muscle-specific genes. These include the cardiac and skeletal muscle actin (28 -30), dystrophin (31), myosin light chain (32), atrial natriuretic factor (33), and creatine kinase M promoters (34). In the case of the skeletal and cardiac actin and the dystrophin genes, SRF binding sites have been found to act as positive tissue-specific promoter elements. While the role of SRF in tissue-specific gene expression is unclear, it has been suggested that SRF may interact with other transcription factors to confer tissue-specific expression. One model for how SRF can mediate disparate phenotypic consequences suggests that SRF interacts with different classes of cell type-specific accessory proteins to confer distinct phenotypic responses (35). Consistent with this hypothesis, SRF has been shown to interact with different classes of factors including the homeodomain protein Phox-1 (35) and a class of transcriptional activator proteins known as ternary complex factors (TCFs) that are members of the Elk-1 subfamily of the ETS family of oncoproteins (reviewed in Ref. 36). It has also recently been reported that SRF and SRF-related proteins can interact through their conserved DNA binding/dimerization domain with myogenic basic helix-loop-helix proteins (37,38). 
In the case of SRF-mediated activation of IEGs, extensive studies of c-fos gene expression in fibroblasts indicate that in response to serum stimulation, SRF mediates gene activation by at least two distinct mechanisms (39). In one case, activation of the p21 ras signaling pathway leads to modification and subsequent activation of the TCF family of SRF-associated factors, thereby activating transcription. In a second, less well characterized SRF-dependent pathway, stimulation of cells can activate expression by a pathway that is dependent on members of the Rho subfamily of Ras proteins. This second pathway occurs in a TCF-independent manner. In both the TCF-dependent and -independent pathways, activation can occur in the absence of new protein synthesis and therefore relies on preexisting SRF protein. While much is known about how SRF activates expression of IEGs such as c-fos, little is known about how SRF regulates genes involved in later responses. One possibility is that newly expressed SRF protein may be involved. To begin to address how SRF may be involved in regulating late responses, we have studied expression of the SRF gene and protein. In previous studies (40) we and others (41) found that the SRF gene is itself an IEG since its transcription can be induced in the absence of new protein synthesis. In response to serum and purified growth factors, peak expression of SRF mRNA occurs at 90 -120 min after stimulation. The expression of SRF protein closely follows RNA expression. Unlike many IEG protein products SRF protein is relatively stable, having an in vivo half-life of 12-16 h (40). The stability of the SRF protein accounts for the apparent paradox that SRF protein is present prior to induction of the gene. In addition, the newly synthesized protein is extensively post-translationally modified by phosphorylation throughout the course of the cell cycle, raising the possibility that these modifications may be involved in regulating SRF's ability to control expression of late acting genes (40). The time of appearance of peak SRF mRNA levels suggests that the SRF gene belongs to a class of IEGs whose expression is delayed relative to other well characterized early IEGs such as the c-fos gene, whose peak expression occurs much earlier at 30 min after stimulation (42). Since delayed IEGs can be induced in the absence of new protein synthesis, their expression is not dependent on activation of early IEGs. Paradoxically, the SRF gene is inducible by many of the same agents that activate early IEGs. This suggests that temporal control of SRF expression occurs at a level downstream of the signaling pathways, such as at the level of transcriptional initiation or message stability. We have cloned the murine SRF promoter and in the present study have begun to address these issues by investigating the promoter regulatory sequences involved in mediating activation of the SRF gene. We have found that maximal serum responsiveness of the SRF promoter is dependent on two different types of cis-acting regulatory elements located within the first 300 nucleotides upstream of the start site of transcription. Our mutational analysis indicates that serum responsiveness of the SRF gene is autoregulated since SRF protein binding to its own promoter is necessary for serum inducibility but that binding of additional upstream factors to the SRF promoter is also required for maximal responsiveness. 
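The stability argument above (an in vivo half-life of 12-16 h for SRF protein, versus mRNA that peaks at 90-120 min and decays within hours) can be made concrete with a simple first-order decay calculation. The sketch below is illustrative only; the 14 h half-life and the chosen time points are round numbers within the quoted range, not measured values.

# Fraction of a protein pool remaining over time, assuming
# first-order (exponential) decay with a given half-life.

def fraction_remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

half_life = 14.0  # hours; illustrative value within the reported 12-16 h range
for t in (2, 6, 12, 24):
    print(f"after {t:2d} h: {fraction_remaining(t, half_life):.0%} of the protein remains")

On this timescale, most of the protein synthesized in one induction cycle is still present many hours later, consistent with detectable SRF protein preceding each new round of transcription of its own gene.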
EXPERIMENTAL PROCEDURES SRF Promoter Isolation-Initially, a FIXII library (obtained from Stratagene) containing genomic DNA from mouse strain 129/SVJ was screened using a radioactively labeled probe consisting of DNA corre-sponding to the N-terminal portion of human SRF protein. clones that contained SRF-related sequences were digested with SacI restriction endonuclease, electrophoresed in a 0.75% agarose gel, and transferred by capillary action to HyBond membrane (Amersham Corp.) overnight. Air-dried filters were then wrapped in Saran wrap and UV-irradiated with a total dose of 1.2 joules/cm 2 using a Fisher Scientific FB-UVXL-1000 UV cross-linker. Fragments containing the SRF promoter were identified using a radiolabeled probe consisting of a 305-base pair HindIII/DdeI restriction fragment isolated from the human SRF cDNA clone pT7⌬ATG (41). Labeling was performed as described by Feinberg and Vogelstein (43). Hybridization was performed overnight at 65°C in a Hybaid rotary oven under conditions described by Church and Gilbert (44). Restriction fragments that contained SRF N-terminal sequences were cloned into the SacI restriction site of pBluescript (Stratagene). Double-stranded sequencing was performed by the method of Sanger et al. (45). The Genetics Computer Group Bestfit software was utilized to identify homology with the human SRF cDNA. Luciferase Reporter Plasmid Construction-SRF-luciferase reporter vectors were constructed by cloning various portions of the SRF promoter into the pGL2 Basic luciferase vector (Promega). Initially, a BglII-HindIII fragment (Ϫ2500 to ϩ679) was cloned into the BglII and HindIII restriction sites of pGL2 Basic. The construct was then digested with HindIII (ϩ679) and NotI (ϩ229), and the DNA ends were blunted with Klenow fragment and then religated. Promoter deletion constructs were constructed by using restriction endonuclease cleavage sites present in the SRF promoter. Cell Culture, Transfection, and Luciferase Assays-NIH3T3 cells were grown in 5% CO 2 in Dulbecco's modified Eagle's medium (Life Technologies, Inc.) containing 10% heat-inactivated calf serum, 0.01% penicillin, and 0.01% streptomycin. All transfections were performed with the calcium phosphate co-precipitation technique as described (46) using supercoiled DNA purified by cesium chloride density ultracentrifugation. Eighteen hours before transfection, cells were seeded at a density of 5 ϫ 10 5 cells/60-mm dish. Transfection mixtures contained 3.5 g of SRF-reporter and 1 g of Rous sarcoma virus-␤-galactosidase as a transfection efficiency control. In all cases, the DNA concentration was adjusted to 7.5 g with pUC19 DNA. Cells were incubated 12-16 h with the DNA/calcium phosphate precipitate, washed two times with phosphate-buffered saline (pH 7.4), and made quiescent by the addition of Dulbecco's modified Eagle's medium supplemented with 0.5% calf serum, 0.01% penicillin, and 0.01% streptomycin for 48 h. Serum stimulation was achieved by replacing the starvation media with Dulbecco's modified Eagle's medium containing 20% fetal calf serum (Life Technologies, Inc.). Preliminary studies showed that maximum inducible luciferase activity was achieved 2 h after stimulation. Cells were harvested in reporter lysis buffer (Promega), frozen on dry ice, thawed, vortexed for 15 s, and centrifuged for 20 s in a microfuge at 10000 ϫ g. The supernatant was used for analysis. Luciferase assays and ␤-galactosidase assays were performed as described by Promega. 
Luciferase activity was measured on a Berthold AutoLumat LB953 luminometer. In all cases, assays were performed in triplicate and experiments were repeated at least three times. DNA Electrophoretic Mobility Shift Assays and Nuclear Extract Preparation-Radioactively labeled probes for use in gel mobility shift assays were prepared by polymerase chain reaction using an SRF promoter fragment (Ϫ165 to ϩ14) as a template and 32 P end-labeled oligonucleotide primers. The products were gel-purified. To ensure that the probes were of equivalent specific activity the same set of primers was used to generate each probe. Primer sequences used in polymerase chain reaction were 5Ј-GCAGCGAGTTCGGTATGTC-3Ј and 5Ј-GG-TATCCCCCAACCCTTCC-3Ј, respectively. In brief, binding conditions in a 20-l volume were 0.2 mM dithiothreitol, 16% glycerol, 2 mM spermidine, 20 g of bovine serum albumin, 2 g of linear pUC19, 0.2 g of poly(dI-dC), and 0.1-1 ng of labeled DNA probe (25,000 cpm). In vitro translated SRF was added and incubated at 4°C for 10 min before the addition of labeled probe. After probe addition, incubation was continued 15-20 min at room temperature. The total reaction was electrophoresed in a 4% polyacrylamide gel in 0.5 ϫ Tris borate-EDTA, the gels were dried, and autoradiography was performed. Competition studies were under the same conditions, except competitor DNA was incubated in the binding reaction for 30 min on ice before the addition of the labeled probe. The addition of the probe at later times after preincubation with competitor, or incubation of the probe in the reaction mixture for longer times, gave identical results, suggesting that equilibrium was established under the conditions used. Quantitation of the competition studies was done using a Molecular Dynamics Phos-phorImager. Serum-starved and serum-stimulated NIH3T3 nuclear extracts were prepared by the method of Dignam (47). For mobility shift reactions using nuclear extracts, 6 g of protein was used and incubation times were doubled. Conditions for the antibody supershift experiments were identical to the shifts of nuclear extract with the exception that a 1:50 dilution of anti-SRF antibody R1122 (described in Ref. 40) was added to the shift reactions 10 min prior to electrophoresis. Mutagenesis-Site-directed mutations were introduced in context in the SRF promoter by the technique of Deng and Nickoloff (48). The template for mutagenesis reactions was region Ϫ322 to ϩ229 of the SRF gene in a pUC19 background. The specific base changes were chosen based on their ability to disrupt factor binding in vitro as documented in the following references: CArG box 1, CCATAAAAGG to CCATA-AAATT (this work); CArG box 2, CCATATAAGG to CCATATAATT (this work); SP1/Ϫ254, GGGCGGG to GGTCTGG (49); SP1/Ϫ83, GGGGCGGGGGCG to GGGGCTTTGGCG (49); CCAATCCAAT to AAAAT (50). Combinations of mutations were generated either by performing mutagenesis reactions on a template already containing a mutation or by subcloning DNA fragments containing the appropriate mutation. RESULTS Structure of the SRF Promoter-To isolate sequences corresponding to the mouse SRF gene we used a 305-base pair restriction fragment, derived from the human SRF cDNA clone pT7⌬ATG (41), to screen a mouse genomic library. This probe contained sequences derived from the 5Ј region of a human SRF cDNA clone. Restriction analysis of the products of this screen identified three clones that contained overlapping sequences (data not shown). 
One clone containing a 5-kb fragment of murine genomic DNA was picked for further analysis. To verify that this clone contained the gene encoding the SRF protein and not a family member, a portion was sequenced and compared with the sequence of a partial mouse SRF cDNA clone. The mouse SRF cDNA clone was previously shown to encode a functional SRF protein and to have high homology to human SRF cDNA.2 This comparison revealed the clone to be 100% homologous to the mouse cDNA over the region corresponding to SRF sequences coding for the first 228 amino acids (data not shown). In addition, the presence of an intron (>1 kb) between the codons for amino acids 167 and 168 was noted. To determine whether the clone we had isolated contained SRF promoter regulatory sequences, the region of the clone 5′ to the protein-coding sequence was sequenced and compared with a full-length human cDNA in which the start site of transcription had been previously mapped (51). As shown in Fig. 1, there was 93% identity at the nucleotide level between the human cDNA and the mouse genomic clone over this region and 97% identity over the 100 nucleotides immediately 3′ to the start site of transcription of the human gene. Based on this analysis, the start site of transcriptional initiation in the murine clone was assigned, and the 5-kb genomic fragment was determined to contain 1 kb of SRF coding sequence and 4 kb of sequence 5′ to the start site of transcriptional initiation.
2 D. Pak and R. P. Misra, unpublished results.
FIG. 1. Nucleotide sequence of the mouse SRF promoter and 5′-untranslated regions. Boldface and underlined sequences include two CArG boxes, two CCAAT boxes, two Sp1 sites, and one high affinity Ets binding site. The lower sequence corresponds to the 5′-untranslated region of the human SRF cDNA (51). The vertical lines indicate sequence identity, and the dots correspond to gaps inserted in the sequence for optimal alignment. The transcription start site is marked as +1, and the initiation methionine codon at position 354 is boldface and underlined.
To measure serum responsiveness of the SRF promoter, a 2.7-kb restriction fragment spanning −2500 to +229 relative to the start site of transcriptional initiation was then isolated from the 5-kb fragment and inserted into a luciferase reporter construct. Serum-inducible Expression Is Mediated through the SRF Promoter-Previously we have shown that the SRF gene is transiently induced when serum-starved NIH3T3 cells are treated with serum (40) or purified growth factors.3 In unstimulated serum-starved NIH3T3 cells SRF mRNA levels are virtually undetectable. SRF mRNA reaches a maximum by 90-120 min after stimulation of cells with 20% fetal calf serum and then returns to nearly basal levels by 6 h after stimulation. To determine the regulatory elements required for transcriptional activation of the SRF gene, progressive 5′ promoter deletion constructs containing different amounts of the SRF promoter and 229 nucleotides of the SRF 5′-untranslated region were fused to a luciferase reporter gene. These constructs, schematically depicted in Fig. 2A, were transiently transfected into NIH3T3 cells, and luciferase activity was measured after serum starvation or 2 h after serum stimulation of starved cells. To normalize for transfection efficiency, each construct was co-transfected with a constitutively expressed Rous sarcoma virus-β-galactosidase reporter. The results of one typical set of experiments are shown in Fig. 2B.
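The normalization and fold-induction calculation used in these reporter assays (luciferase activity corrected by the co-transfected β-galactosidase activity, then the stimulated mean divided by the unstimulated mean) can be sketched as follows. The readings below are invented placeholders for a single construct measured in triplicate, not data from Fig. 2.

# Hypothetical normalization of luciferase reporter data:
# each reading is corrected for transfection efficiency using the
# co-transfected beta-galactosidase control, and fold induction is
# the ratio of the stimulated to the unstimulated normalized means.

def normalized_mean(luciferase_rlu, beta_gal):
    ratios = [luc / bgal for luc, bgal in zip(luciferase_rlu, beta_gal)]
    return sum(ratios) / len(ratios)

# Triplicate readings for one construct (placeholder values)
starved_luc, starved_bgal = [1200, 1350, 1100], [0.95, 1.10, 0.90]
stimulated_luc, stimulated_bgal = [6100, 5800, 6500], [1.05, 0.98, 1.02]

basal = normalized_mean(starved_luc, starved_bgal)
induced = normalized_mean(stimulated_luc, stimulated_bgal)
print(f"fold induction = {induced / basal:.1f}")

With these placeholder numbers the ratio comes out close to 5, the order of induction reported below for the full-length promoter construct; the point of the sketch is only the normalization step, not the values.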
Upon serum stimulation of cells containing a reporter with 2500 nucleotides of upstream sequence, there is an approximately 5-fold increase in luciferase activity relative to the unstimulated cells. Roughly the same -fold stimulation is observed for constructs in which all but 111 nucleotides of upstream sequence have been deleted. In contrast, a construct with 35 nucleotides of upstream sequence, containing only the SRF TATA element, is stimulated 1.3-fold. These results indicate that the major sequence determinants of serum responsiveness in the SRF promoter reside between 35 and 111 nucleotides upstream from the start site of transcription. Fig. 2 also shows that while the -fold stimulation of the Ϫ111 construct is similar to the Ϫ322 and Ϫ2500 constructs, there is a dramatic decrease in the absolute level of expression of the Ϫ111 construct in both stimulated and unstimulated cells. This effect is even more pronounced in the Ϫ35 minimal construct, suggesting that sequences both between Ϫ35 and Ϫ111 and between Ϫ111 and Ϫ322 are involved in regulating transcriptional efficiency of the SRF gene. The results presented in Fig. 2 indicate that sequences required for maximal serum-stimulated expression of the SRF gene are present within the first 322 nucleotides upstream of the start site of transcription. Computer analysis of the sequence of this region identified a number of potential regulatory elements that are in boldface type and underlined in Fig. 1 ing motif at Ϫ103, and two CCAAT box elements located at Ϫ90 and Ϫ123. A CArG Box and the Ϫ83 Sp1 Binding Site Are Major Determinants That Mediate Serum Induction of the SRF Promoter-To begin to determine which of the potential regulatory elements located in the Ϫ35 to Ϫ322 region mediate serum responsiveness of the SRF promoter, we mutated the Sp1, CCAAT box, or CArG box elements, either alone or in various combinations and then measured luciferase activity before and after serum stimulation. Point mutations of each putative regulatory element, which abolish factor binding, were introduced into the indicated elements in the context of the wild type Ϫ322 reporter construct (Fig. 3A), and their effects on expression were tested in transient transfection assays. As seen in Fig. 3B, the most dramatic effect on serum responsiveness occurred when both CArG boxes were simultaneously mutated or when the Ϫ83 Sp1 site was mutated. Induction of the double CArG mutant was reduced from 5-fold for the wild type Ϫ322 construct to 1.3-fold for the mutant. This value was similar to the induction of the Ϫ35 TATAonly minimal construct. Mutation of either CArG box alone had either no effect (CArG box 1), or a modest 25% reduction in responsiveness (CArG box 2). Since as shown below, the responsiveness of the individual CArG boxes correlates with their relative affinity for SRF, these results suggest that they may serve redundant functions during serum stimulation of the SRF gene. Similarly, mutation of the Sp1 site at Ϫ83 diminished serum responsiveness to a level comparable with that of the double CArG box mutant, reducing the induction to 1.5-fold. This effect appears to be dependent on the distance of an Sp1 site from the start site of transcription since mutation of the Sp1 FIG. 3. Functional analysis of SRF promoter elements. A, schematic representation of reporter constructs used. 
In each case, site-specific mutations that disrupted the indicated binding sites (see "Experimental Procedures") were introduced in context into SRF-luciferase reporter constructs containing 322 nucleotides of sequence upstream of the transcriptional start site. B, luciferase assays were carried out on extracts from NIH3T3 cells transiently transfected with the indicated reporter constructs and either serum-starved (Ϫ) or serum-stimulated (ϩ) for 2 h. Basal expression refers to the level of luciferase activity in unstimulated cells transfected with the indicated reporter construct. The -fold induction is determined for each construct by comparing the luciferase activity in the stimulated and unstimulated case. The basal expression of the Ϫ322 construct containing wild type elements is arbitrarily assigned a value of 100%. Luciferase activity is reported as relative light units (RLU). For each point, values were determined in triplicate and corrected for transfection efficiency. Results from at least three independent experiments are shown (means Ϯ S.E.). site located at Ϫ254 has significantly less effect on serumstimulated expression of the Ϫ322 construct, reducing activation approximately 33% from 5-to 3.2-fold. In contrast to the effect of the CArG box or Sp1 site mutants, mutations in either one or both CCAAT boxes located at Ϫ90 and Ϫ123 have virtually no effect on the -fold induction of the Ϫ322 construct. Together, these results suggest that the factors that bind the Ϫ83 Sp1 site and the CArG boxes are together responsible for mediating serum responsiveness of the SRF promoter. Transcriptional Efficiency of the SRF Promoter Is Regulated by the Ϫ90 and Ϫ123 CCAAT Boxes-As seen in Fig. 2, while the -fold induction of both the Ϫ322 and Ϫ111 constructs are similar upon serum stimulation, the overall transcription efficiency of the Ϫ111 construct is dramatically reduced. This effect is observed for both unstimulated and stimulated expression, suggesting that elements contained between Ϫ111 and Ϫ322 are important for basal transcription. A likely candidate for one element is the CCAAT box located at Ϫ123. Mutation of this site leads to a 60% reduction in the expression of the Ϫ322 construct in unstimulated cells. When the Ϫ90 CCAAT box is mutated there is a 30% reduction in expression. An even more dramatic effect is observed when both the Ϫ90 and Ϫ123 elements are mutated. Expression from the double mutant is reduced to 13% of wild type levels in unstimulated cells, comparable with expression of the Ϫ111 construct. These results suggest that two CCAAT boxes are required for maximal transcriptional efficiency of the SRF promoter. CArG Box 1 and CArG Box 2 Bind SRF-It has previously been shown that CArG box-containing elements can mediate serum-stimulated gene expression by an SRF-dependent mechanism. In the case of the c-fos SRE, which is the most extensively studied CArG box-containing element, a CArG box is flanked on either side by regions of imperfect dyad symmetry (52). In fibroblasts, SRF appears to be the major c-fos CArG box binding factor, although at least eight other transcription factors have been shown to interact with the c-fos SRE in vitro (53,54). Mutations in the c-fos CArG box that abolish SRF binding also abolish serum responsiveness of the c-fos promoter, and minimal SRF binding sites are capable of imparting serum responsiveness to promoter minimal reporter plasmids (46). In the case of the c-fos promoter, these studies indicate that serum responsiveness requires SRF. 
In the case of the SRF promoter, the results in Fig. 3 indicate that a reporter containing mutant CArG boxes, incapable of binding SRF, is severely impaired in its serum responsiveness. This suggests that SRF is responsible for mediating the serum response. Therefore, we wanted to determine whether SRF was a major SRF CArG box binding protein in NIH3T3 nuclear extracts. To do this, we performed electrophoretic mobility shift assays using nuclear extracts prepared from either serum-starved cells or cells that had been serum-stimulated for 2 h and a radioactively labeled probe consisting of SRF sequences spanning Ϫ165 to ϩ14, which included CArG boxes 1 and 2. As seen in Fig. 4, extracts from either stimulated or unstimulated cells formed two complexes with this probe with distinctly differing mobilities, labeled I and II. To determine whether either of these complexes contained SRF, antibody supershift assays were performed using polyclonal antibodies generated to the N-terminal half of the human SRF protein (40). As seen in Fig. 4, lanes 2 and 5, these antibodies specifically supershift only the slowest migrating complex, I, indicating that this complex contains SRF. The observation that a SRF-containing complex is present in extracts from both stimulated and unstimulated cells suggests that SRF is constitutively bound to the SRF promoter. CArG Box 1 and CArG Box 2 Bind SRF with Similar Affinity-To determine the relative affinity of SRF for CArG box 1 or CArG box 2, binding assays using in vitro translated SRF were performed. In one experiment, radiolabeled wild type probe was competed with increasing concentrations of nonlabeled DNA containing either CArG box mutant. As seen in Fig. 5 both CArG boxes compete effectively with the wild type SRF promoter fragment for SRF binding. Under the binding conditions used here, however, CArG box 2 has an approximately 2-fold greater affinity for SRF than CArG box 1. In a second experiment, the relative affinity of CArG box 1 and CArG box 2 for SRF was determined by measuring their ability to compete against each other for SRF binding. In Fig. 5B, it can be seen that CArG box 2 competes approximately 2-fold more efficiently for binding to a CArG box 1 labeled probe than CArG box 1 competes for a CArG box 2 labeled probe (e.g., compare lanes 4 and 8). In this experiment, to insure that the specific activities of both probes were identical, oligonucleotide primers were labeled and used in separate polymerase chain reaction reactions using templates containing either a CArG box 1 or CArG box 2 mutation. Identical amounts of radioactivity from each probe synthesis reaction were then added to each shift reaction. The relative binding was determined by directly comparing the differing amounts of shifted probe and was quantified by PhosphorImager analysis. Binding of SRF Simultaneously to CArG Box 1 and CArG Box 2 Is Inefficient-Mutant reporter constructs that contain only one functional CArG box are capable of responding to serum nearly as well as a wild type promoter containing two intact CArG boxes (Fig. 3). This suggests that in the case of serum stimulation, CArG boxes 1 and 2 perform redundant functions. This raises the possibility that SRF binding to each CArG box may be mutually exclusive. To determine whether SRF was capable of simultaneously binding both CArG boxes 1 and 2, in vitro translated SRF was complexed with probes mutated in either CArG box 1 or box 2, and the mobility of FIG. 4. SRF protein binds the SRF promoter in vivo. 
DNA mobility shift assays were carried out using nuclear extracts from either serum-starved or serum-stimulated NIH3T3 cells, and a 32 P-labeled DNA probe corresponding to Ϫ165 to ϩ14 of the SRF gene. Reactions were electrophoresed on a nondenaturing polyacrylamide gel. The positions of the free probe, two specific complexes (I and II), and the SRF-DNA complex whose migration is further retarded by interaction with anti-SRF antibodies (Ab/SRF/DNA) or left unaffected by the addition of preimmune sera (PI), were detected by autoradiography. Complexes I and II were competed with the SRF promoter DNA fragment but not with a nonspecific DNA fragment (not shown). these complexes was compared with the mobility of complexes formed with the wild type probe. As shown in Fig. 5, the mobility of the complex formed with either mutant probe is indistinguishable from the mobility of the major complex formed with the SRF wild type probe (Fig. 5, compare panels A and B). In the case of the wild type probe, however, long exposures of the autoradiographs reveal an additional slower mobility complex (Fig. 5A, lanes 1 and 5). This slower mobility complex is not detected when CArG box mutant probes are used (Fig. 5B, lanes 1-7) or when reticulocyte lysate alone is used (not shown), suggesting that the slower mobility complex reflects simultaneous binding of SRF to both CArG boxes. This slower mobility complex is likely to reflect inefficient binding of SRF to both sites, since it is not observed when autoradiographs are exposed for shorter periods of time, or when less SRF protein is used in the shift reactions (Fig. 5A, lanes 1, 5, and 9). In addition, increasing the amount of SRF added to the shift reactions does not change the ratio of the slower to faster migrating complex. Together with the observation that CArG boxes 1 and 2 differ in their affinity for SRF by only 2-fold, this suggests that formation of the slower mobility complex is not dependent on first saturating a high affinity site. The poor efficiency of formation of the slower complex suggests that in vitro, on a single probe molecule, occupancy of both CArG boxes by SRF is largely mutually exclusive. DISCUSSION In the present study we have examined the promoter regulatory elements involved in mediating serum induction of the SRF gene. Our results show that a 111-nucleotide sequence immediately upstream of the SRF start site of transcriptional initiation is sufficient to confer serum responsiveness to a heterologous reporter gene. This region contains two CArG boxes and an Sp1 binding site. While Ϫ111 constructs exhibit serum responsiveness, maximal expression also requires additional sequences located between Ϫ111 and Ϫ322. This region does not affect -fold stimulation, suggesting that elements contained in this region are not targets for regulation by serumstimulated signaling pathways. However, this region does appear to affect the transcriptional efficiency of the SRF promoter as evidenced by the elevated levels of both induced and basal transcription observed from the Ϫ322 reporter constructs. Our analysis also indicates that transcriptional efficiency is affected by two CCAAT box elements located 90 and 123 nucleotides upstream from the start site of transcriptional initiation. Disruption of one or both of these elements decreases overall transcription. Previously, it was shown that the CCAAT box binding factor NF-Y can facilitate in vivo recruitment of upstream factors in the HLA-DRA promoter (50). 
It has been proposed that CCAAT box factors may function to enhance transcription by stabilizing binding of upstream factors. Our observations are consistent with this role of CCAAT box binding factors in the SRF promoter.

FIG. 5. A, SRF binds CArG boxes 1 and 2 with similar affinity. DNA mobility shift assays were carried out using in vitro translated SRF, a 32P-labeled DNA probe corresponding to −165 to +14 of the SRF gene, and the indicated molar excess of competitor DNA consisting of unlabeled promoter fragment containing point mutants in CArG box 1, CArG box 2, or both. Lanes 1-8 were exposed longer than lanes 9-13 to reveal the additional complex. B, binding of SRF simultaneously to CArG box 1 and CArG box 2 is inefficient. DNA mobility shift assays were carried out using in vitro translated SRF and a 32P-labeled DNA probe corresponding to −165 to +14 of the SRF gene containing a fragment containing mutant CArG box 1 or mutant CArG box 2 sequences. Each reaction also contained the indicated molar excess of competitor DNA consisting of unlabeled promoter fragment containing a mutation in the reciprocal CArG box. Mutant CArG boxes are represented by X-filled boxes and wild type by open boxes. Reactions were electrophoresed on a nondenaturing polyacrylamide gel. The positions of the free probe and the SRF-DNA complex were detected by autoradiography and subsequently quantified by PhosphorImager analysis. Unprogrammed reticulocyte lysates gave no observable complex (not shown).

What protein factors are required for mediating serum responsiveness of the SRF promoter in vivo? Our analysis indicates that at least one of two CArG box sequences located between −35 and −111 nucleotides is required for mediating serum stimulation of a luciferase reporter. CArG box 2 is identical to a serum and growth factor-responsive CArG box found in the zif-268 promoter (5). CArG box 1 and 13 of 15 flanking nucleotides match the reverse sequence of a chick β-actin promoter element (55). In vitro both CArG boxes bind SRF with similar affinities, yet binding to SRF appears to be mutually exclusive. Previously, it has been shown in in vitro protection analyses that SRF protects 10 nucleotides flanking either side of the c-fos SRE CArG box (52). Since CArG boxes 1 and 2 in the SRF promoter are separated by 10 base pairs, this raises the possibility that the inability of SRF to efficiently bind both CArG boxes simultaneously may be due to steric hindrance. Electrophoretic mobility shift assays using nuclear extracts from either serum-starved or serum-stimulated NIH3T3 extracts, performed in the presence or absence of antibodies specific for SRF, suggest that SRF also binds CArG box 1 or 2 in vivo. Serum responsiveness of reporter constructs containing mutant CArG boxes, which are deficient for SRF binding in vitro, placed in their natural context in the wild type promoter is drastically reduced. Taken together, these results indicate that SRF is likely to be a major regulator of SRF promoter serum responsiveness. Studies of the mechanism by which SRF mediates inducible transcription, carried out mainly on the c-fos promoter, have revealed that SRF acts to activate transcription by interacting with other transcription factors. These include YY1 (56), Phox-1 (35), and members of the Elk-1 subfamily of the Ets family of transcription factors.
SRF has also recently been shown to interact with myogenic basic helix-loop-helix regulatory factors (38); however, the significance of this interaction for serum-regulated gene expression is unclear. In the SRF promoter, CArG boxes 1 and 2 contain overlapping consensus binding sites for the YY1 transcription factor. YY1 is a multifunctional transcription factor that has been reported to function in transcriptional activation and repression. Recent reports indicate that the ability of YY1 to bend DNA may account for some of these disparate functions (56,57). It has been proposed that YY1 acts as a structural factor to organize DNA-protein complexes that form on a promoter. In the c-fos promoter YY1 either represses or enhances transcription, depending on which promoter binding site it occupies. When YY1 binds to the c-fos SRE it bends DNA in a fashion that enhances SRF binding in the c-fos promoter, thereby potentiating transcription. In contrast, when YY1 binds to a site located between the c-fos cAMP response element and the TATA box it bends DNA in a fashion that represses transcription. Based on our mutational analysis, the role of YY1 in mediating serum responsiveness of the SRF promoter is unclear. It is likely that YY1 binding is not sufficient to mediate serum induction, since double CArG box mutants, in which YY1 binding sites are left intact, are unresponsive to serum stimulation. However, since it has been reported that YY1 can potentiate SRF binding and SRF-mediated transcription by transiently binding to SRF-occupied CArG boxes, it is possible that YY1 may be playing a role in potentiating the serum response of the SRF promoter. So far, however, under the shift conditions used here we have not been able to detect YY1 in CArG box complexes containing SRF. However, we are continuing to investigate a possible role for YY1 in SRF promoter activity. Analysis of the c-fos promoter has also identified a family of SRF-associated factors involved in mediating SRF-dependent activation of the c-fos gene (reviewed in Ref. 36). These factors, which belong to the Ets-1 family of transcription factors, have been termed p62 TCF s, based on their ability to interact with an SRF-DNA complex to form a ternary complex. In the c-fos promoter, p62 TCF s bind to a site immediately adjacent to the c-fos SRE. Interaction of p62 TCF s with SRF allows p62 TCF to bind a CAGGAT binding site adjacent to the c-fos SRE. In the case of activation of the c-fos gene, various mitogenic agents including serum, phorbol ester tumor promoters, and purified growth factors can stimulate mitogen-activated protein kinasedependent activation of p62 TCF , thereby stimulating c-fos expression. In some cases, such as phorbol ester-mediated activation, SRF-dependent gene expression is p62 TCF -dependent (58). In other cases, such as serum-mediated activation, SRFdependent activation can also occur in a TCF-independent fashion, indicating that alternative SRF-mediated activation pathways exist (58). Since p62 TCF s have been demonstrated to play a role in serum mediated gene expression we also wanted to determine whether p62 TCF may be involved in mediating serum stimulated expression of the SRF promoter. We investigated whether in vitro translated p62 elk-1 was capable of forming a ternary complex with SRF on the SRF promoter. A consensus Ets motif (C/A)(C/A)GGA(A/T) important for promoting efficient ternary complex has been identified (59). 
In the first 322 nucleotides upstream of the start site of transcription, there are three potential consensus Ets binding sites located at −32 (overlapping CArG box 1), −103, and −195. We found that under conditions in which SRF and p62 elk-1 formed an efficient ternary complex with the c-fos SRE, we were unable to detect efficient ternary complex formation using a SRF promoter fragment spanning −165 to +14 (data not shown). We found similar results using either in vitro translated p62 elk-1 or nuclear extracts from NIH3T3 cells. Our observations suggest that serum-mediated activation of the SRF promoter is not mediated by ternary complex formation, although it is possible that in vivo ternary complex formation may be occurring, which we are unable to detect using our in vitro shift conditions. Alternatively, it is possible that ternary complex formation may be occurring preferentially using the −195 site. In addition to SRF-dependent DNA binding, p62 TCFs can also bind DNA autonomously through high affinity Ets binding sites (59). Thus, although ternary complex formation may not be important for serum responsiveness, TCF proteins may be playing a role in serum stimulation by autonomous binding to the SRF promoter. One such site, identical to an Ets binding site found in the Drosophila E74 gene, is located at −103 of the SRF promoter. DNA binding assays reveal that in vitro translated p62 elk-1 can bind efficiently to this site (not shown). This site, however, does not appear to be necessary for serum responsiveness of the −322 reporters, although basal expression may be affected (data not shown). We have also not been able to detect Elk-1 in complexes formed with NIH3T3 extracts using anti-Elk-1 antibodies. Together, these observations suggest that factor binding to the −103 Ets site is not necessary for serum responsiveness in vivo. This is consistent with the observation that serum responsiveness of the c-fos promoter can occur in a SRF-dependent yet TCF-independent manner (58). However, a more careful analysis of the factors that bind this region of the SRF promoter in vivo and their effect on SRF promoter responsiveness is still required. While our analysis suggests that YY1 and Ets factors are likely not to be necessary for serum stimulation of the SRF promoter, the results shown in Fig. 3B suggest that another transcriptional activator protein, Sp1, may act together with SRF to activate serum-mediated expression. Disruption of either Sp1 binding site in the SRF promoter reduces the level of stimulation of the −322 reporter while leaving basal expression unaffected. In particular, disruption of the −83 Sp1 site has significantly more effect than disruption of the −254 site, reducing expression to levels comparable with the double CArG box mutant. Since Sp1 sites in the absence of intact CArG boxes are not sufficient to mediate serum stimulation of the SRF promoter, these results suggest that in the context of an intact SRF promoter Sp1 may interact with CArG box factors to mediate serum stimulation. One possibility is that Sp1 is directly interacting with SRF to mediate serum responsiveness. This interpretation is supported by the observation that full-length SRF and Sp1 can interact in a yeast two-hybrid assay system (footnote 4). Consistent with the idea that SRF and Sp1 can interact to mediate gene expression, intact SRF and Sp1 binding sites have been shown to be important for regulating muscle-specific expression of the cardiac α-actin gene (60). While our results in Fig. 3B suggest that Sp1 is involved in mediating serum responsiveness of the SRF gene, the mutation used to disrupt Sp1 binding also disrupts a zif-268 consensus binding site. It is therefore possible that zif-268 or a related factor may be playing a role and not Sp1. We are currently investigating further the nature of Sp1 and SRF interactions and their role in SRF promoter serum responsiveness as well as the role of zif-268 in the function of the −83 element.

Footnote 4: D. Krainc and R. Misra, unpublished observations.

The SRF gene belongs to a class of IEGs whose expression is delayed relative to other IEGs. The molecular basis for temporal control of expression of different classes of IEGs is not known. One possibility is that similar signaling pathways target distinct complexes of promoter regulatory factors to regulate temporality of expression. Our observations that, although the SRF promoter is regulated in an SRE-dependent manner, different combinations of interacting elements are required for maximal expression relative to other SRE-controlled early IEGs is consistent with this interpretation. One intriguing possibility is that SRE-dependent promoters that are regulated in a TCF-dependent fashion, such as the c-fos promoter, may be expressed at earlier times than SRE-dependent promoters, such as the SRF promoter, in which SRF interacts with other transcription factors. We are currently investigating this hypothesis.
2018-04-03T03:46:10.391Z
1996-07-12T00:00:00.000
{ "year": 1996, "sha1": "dd984c476216203f568d01a898075041f754c0c7", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/271/28/16535.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "e5ae88343b58930283ce67a94b44955f914e2807", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
268374874
pes2o/s2orc
v3-fos-license
Survival analysis under imperfect record linkage using historic census data Background Advancements in linking publicly available census records with vital and administrative records have enabled novel investigations in epidemiology and social history. However, in the absence of unique identifiers, the linkage of the records may be uncertain or only be successful for a subset of the census cohort, resulting in missing data. For survival analysis, differential ascertainment of event times can impact inference on risk associations and median survival. Methods We modify some existing approaches that are commonly used to handle missing survival times to accommodate this imperfect linkage situation including complete case analysis, censoring, weighting, and several multiple imputation methods. We then conduct simulation studies to compare the performance of the proposed approaches in estimating the associations of a risk factor or exposure in terms of hazard ratio (HR) and median survival times in the presence of missing survival times. The effects of different missing data mechanisms and exposure-survival associations on their performance are also explored. The approaches are applied to a historic cohort of residents in Ambler, PA, established using the 1930 US census, from which only 2,440 out of 4,514 individuals (54%) had death records retrievable from publicly available data sources and death certificates. Using this cohort, we examine the effects of occupational and paraoccupational asbestos exposure on survival and disparities in mortality by race and gender. Results We show that imputation based on conditional survival results in less bias and greater efficiency relative to a complete case analysis when estimating log-hazard ratios and median survival times. When the approaches are applied to the Ambler cohort, we find a significant association between occupational exposure and mortality, particularly among black individuals and males, but not between paraoccupational exposure and mortality. Discussion This investigation illustrates the strengths and weaknesses of different imputation methods for missing survival times due to imperfect linkage of the administrative or registry data. The performance of the methods may depend on the missingness process as well as the parameter being estimated and models of interest, and such factors should be considered when choosing the methods to address the missing event times. Supplementary Information The online version contains supplementary material available at 10.1186/s12874-024-02194-6. Introduction Publicly available individual U.S. census records spanning 150 years (1790-1940), which are re-identified 72 years after the respective census dates, offer a rich resource for studying demographic, social, and economic characteristics of the U.S. population at various points in history, as well as changes over time.Census records are particularly useful for investigating sociological and epidemiological questions when matched with vital records such as birth, death, and marriage certificates from state-run registries or other data sources [1].For example, Beach et al. [2] studied the effect of childhood typhoid exposure in the late 1800s on earnings and educational attainment later in life, by linking city-year level typhoid fatality rates to children in the 1900 census, which are then linked with adult records from the 1940 census.In another study, Ferrie et al. 
[3] investigated the impact of lead exposure on test scores by using the 1930 census to estimate lead exposure for children through water supplies and linking it with test scores for World War II enlistees. However, in the absence of unique identifiers across data sources, the linkage between census records and vital records is not always successful, resulting in missing or misclassified data for a substantial portion of the census population.Unlike the decennial census which is conducted on a national level, vital registries are decentralized and managed on a state-by-state basis.They were developed much later and had uneven and sparse coverage compared to the national census, especially before 1933 [4,5].Federal agencies such as the National Center for Health Statistics (NCHS) were later established to collect information from the state registries in a centralized database, but coverage may not extend to the earliest years of record collection.For example, the earliest records in the National Death Index (NDI) date to 1979, whereas vital records were kept as early as 1881 in states like New York and Pennsylvania.Furthermore, the NDI uses a computerized probabilistic scoring algorithm to match vital records based on variables such as social security number, month, day, and year of birth, first and last name, and state of residence, among others.The absence or misclassification of any of these variables (for example due to changes in name or place of residence) reduces the probability of a successful match.Census records contain limited information on an individual for matching, as not all of the variables needed for successful matching are collected, leading to many missing or mismatched records.This poses particular challenges for time-to-event analyses using historical census data linked with administrative death records.First, the event time may not be observed for some subjects.As a retrospective analysis, it is unknown whether the unobserved event times are due to a failed linkage with a vital record, or the individual being alive at the time of analysis.Second, the linkage process itself is prone to error and may result in multiple matches and false matches, particularly if the linkage variables available are insufficient for uniquely identifying an individual.Many methods exist for handling the former issue of missing data in survival analysis, and a handful are equipped for addressing the second challenge, but to our knowledge methods have not been developed for addressing both simultaneously. Methods for handling missing survival times assume a censoring framework for the missing events.With rightcensoring, the individual is lost to follow up before the event has occurred.The presence of censoring in timeto-event data is often dealt with by including censored individuals in the likelihood estimation procedure up until the time at which they are lost to follow-up.Such an approach is used in nonparametric Kaplan-Meier estimators, semi-parametric Cox proportional hazards regression, and parametric survival models such as the accelerated failure-time (AFT) model.However, in our context, the census date is the only point of observed data collection for each individual and one that is arbitrarily assigned relative to each person's timeline.Thus, right-censoring on this date may offer little additional information compared to limiting analysis to only completely observed records. 
Missing event times using historical data may also be treated as interval-censored, where the event is known to have occurred between two observed time points for an individual.Methods for this setting include cruder approaches such as imputing the event time at the beginning, midpoint or end of the interval [6].However, this can lead to biased inference [7], particularly if the interval is large.Multiple imputation methods which make use of the information contained in the observed data are also used for interval-censoring [8][9][10].However, these methods are not readily applicable to our setting of survival analysis where the lifetime of an individual is of interest, as determined using census data linked with death records.While we may be willing to assume that all individuals have died at the time of the analysis (for example, if the census occurred 100 years prior to the date of analysis), this is a large time interval between the time of the census and analysis time for using interval censoring methods.Furthermore, the aforementioned methods for interval censoring require that the upper bound for the interval is fixed and known for each individual.In our setting, the upper interval must be determined ad hoc (for example, a fixed number of years post-census, or the date of analysis).Finally, for some of the proposed methods, the imputation is iterative when fitting Cox or failure-time models, and do not readily extend to studies where there is interest in estimating the median survival time.On the other hand, for older individuals, simply right-censoring at the date of census is a very conservative approach, as enough time may have elapsed that the event has certainly occurred before the date of analysis.Novel approaches are needed to handle this unique framework using historic census records. Methods for analyzing linked data should also account for uncertainty in the matching process, namely the potential for false or equivocal matches among the observed records.Failure to do so can lead to an underestimation of the variance and/or bias in model estimates [11,12].Note that we limit the scope of this work to random errors in observed matches, meaning the probability of a true linkage is independent of the linkage variables.Thus, we assume that failure to account for false matches impacts only the uncertainty around our estimates. In this report, we seek to compare methods for handling missing event times in survival analysis using linked historical census data.We explore the performance of right-censoring (on the census date), inverse-probability weighting of the complete data, and two multiple imputation methods for estimating both median survival and the association parameters in proportional hazards and failure-time models.We are particularly interested in the repurposing of restricted mean survival and conditional survival for multiple imputation of missing event times.To account for the uncertainty in the merging process, we incorporate probabilistic scores provided by the vital record agency in our analysis.We apply the methods to study the effect of occupational and non-occupational asbestos exposure on life expectancy in a historical cohort from Ambler, PA, based on the 1930 census. 
Ambler, PA was home to the nation's largest asbestos manufacturing plant from the early 1900s to the mid-1980s.Many residents in Ambler experienced daily exposure to large amounts of asbestos in the factory as well as in their neighborhood and inside their homes.Although the asbestos factory has been closed since 1988, disposal of asbestos-containing waste continued through the majority of the twentieth century, forming several large mounds containing over 1.5 million cubic yards of asbestos waste spread over 25 acres [13].This led to possible continuous community-level asbestos exposure through wind and water distribution channels for many years after.Several studies [14][15][16] have shown a clear link between exposure to asbestos and debilitating, often lifethreatening, diseases such as pulmonary fibrosis, lung cancer, and mesothelioma.While the effects of exposure on mortality due to asbestos-related diseases (ARDs) have largely been studied in occupational settings, less is known about mortality among non-occupationally and environmentally exposed individuals.In this historical cohort study, census data were linked with death records obtained through matching with Ancestry.comand the National Death Index (NDI), however, there was substantial ascertainment bias in identifying death records thus motivating this work [17]. In the next section, we describe the time-to-event setting using historical census data with missing event times, followed by the proposed methods to impute the missing data.Then we perform a simulation study of the methods described, comparing them to a gold standard analysis where the outcomes are fully observed, as well as a complete case only data analysis.We then apply the methods to characterize asbestos-associated mortality in a historical cohort from Ambler, PA, and conclude with a discussion of the results. Setting We consider data where the outcome of interest is a time-to-event variable, T i .Let X i represent a binary expo- sure variable of interest, and Z i represent a covariate, where i = 1, . . ., n indexes the n individuals in the census cohort.In keeping with the format of historical census data, the time variable t ∈ (0, T i ) is defined on the scale of years since birth, and T i represents the lifetime of an individual.For each individual, one observation time, W i , occurs corresponding with the date of the census, such that W i < T i for all i .We also denote the time of analysis (end of study) as V i , which (like T i ) is defined using time since birth.Although the census and analysis dates are fixed calendar dates, such that V i − W i = c (a constant) for everyone, because our timescale is age starting at birth, W i and V i are specific to each individual.The true event indicator is denoted Following the framework of Goldstein et al. 
[11], we have a primary data file, known as the file of interest (FOI), that contains linkage variables, exposure X i and the covariate of interest Z i .We also have a second- ary linkage data file (LDF), which contains linkage variables and an event time (which may or may not be the true event time) for those who are matched.Ideally, if linkage with all death records were successful, we would observe all event times T i < V i in the LDF, and right- censor those who were not matched with records in the LDF at time V i .However, in our setting, there is imper- fect linkage.We, therefore, introduce a matching indicator, R i ∈ {0, 1}, where R i = 1 if record i from the FOI is matched to a record in the LDF, and R i = 0 if there is no match.To the investigator, it is unknown whether R i = 0 is due to failed linkage with a death record (i.e., if in fact δ i = 1 but no match was found) or because the event has not yet occurred ( δ i = 0).This is illustrated more clearly in Fig. 1 below. Furthermore, there is uncertainty in the linkage process, as the wrong record in the LDF file may be selected as a match for the FOI record.We denote the event time in the matched LDF record as T * i , which may or may not be equal to T i .We therefore distinguish between une- quivocal matches, in which there is a high probability that T * i = T i , and equivocal matches, where the equality is uncertain.Often, matched records from an LDF are accompanied by a probabilistic score, representing the probability that a record matched with i is a true match, denoted as p i,match = Pr T * i ∈ (T i − ǫ, T i + ǫ) .This probability ranges between 0 and 1.Thus, the data vector we observe for each individual is either We make two key assumptions within this framework.First, we assume that no one alive at the time of analysis (that is, T i ≥ V i ) is matched with a record in the LDF.Second, we assume that only one match is observed for any individual, corresponding to the record with the highest probabilistic score. Models and parameters of interest Our primary interest is the estimation of the following quantities: First, the median survival time, defined as the value of t for which S (t) ≤ 0.5; Under perfect record linkage, it is estimated as the earliest time at which the Kaplan-Meier curve, a nonparametric estimator of survival distribution over time, falls at or below 50% survival.We seek to estimate median survival within exposure-group X i = {0, 1} , denoted by M X , and covar- iate-specific median survival times within subgroups defined by Z i = 0 and Z i = 1 , denoted by M ZX .Thus, we have M x = min(t) : S KM (t|X i = x) ≤ 0.5 , and M xz = min(t) : S KM (t|X i = x, Z i = z) ≤ 0.5 .Secondly, we are interested in estimating the parameters of association between X i and T i when adjusting for Z i , including: (1) the log-hazard ratio for exposure X i , represented by β 1 in the Cox proportional hazards (Cox PH) model, where no parametric form is assumed for the baseline hazard, 0 (t) , and (2) the log event-time ratio for X i , represented by α 1 in the accelerated failure-time (AFT) model, where ǫ follows an extreme value distribution (i.e.f (ǫ) = exp(ǫ − exp(ǫ)) , and it is assumed that. In Eqs. 
1 and 2 above, p is a shape parameter, and γ = exp(−(α 0 + α 1 X i + α 2 Z i )) p is the scale parameter.α 1 can be interpreted as the log event-time ratio for being in the exposed group ( X i = 1) compared to the unexposed group ( X i = 0 ).Note that under the Weibull distribution, α 1 in the AFT model has a direct relationship to β 1 , the log-hazard ratio from the proportional hazards model: Missing data methods We compare the performance of various methods for estimating M 0 , M 1 , M 00 , M 01 , M 10 , M 11 , β 1 and α 1 in the presence of missing event times due to imperfect linkage, assuming missing-at-random (MAR) and missing-completely-at-random (MCAR) mechanisms, where respectively.We also seek to account for the uncertainty associated with equivocal matches.Note, we do not address the case where ( 1) (3) Fig. 1 Survival framework for analysis of lifetime data using census information missingness depends on unobserved data, which may include the missing event times, as this would require additional assumptions on the missing-not-at-random (MNAR) process, which is outside of the scope of this paper.The methods we consider in the current report are divided into non-imputation and imputation-based methods.The non-imputation approaches include weighted and unweighted complete-case analysis.For imputation methods, we investigate multiple imputations based on the restricted-mean (MIRM) function of the survival time and the conditional survival function (MICS).We describe each of the approaches below. Complete-case and IPW A complete-case approach involves restricting the analysis to individuals for whom T i is observed ( R i = 0 ) and the match is unequivocal (i.e., p i,match ≥ P ∈ [0, 1] , where P is a chosen threshold for the certainty of the match).In the MCAR setting, we expect a complete-case analysis to yield unbiased, but inefficient estimates for β 1 , while in the MAR case, a complete-case approach may result in bias. Inverse probability weighting (IPW) We can extend the complete case approach to include individuals who have both unequivocal and equivocal matches (i.e.p i,match < P ∈ [0, 1] ).A weighted analysis is then performed, where the contribution of each observation to the estimator is weighted by the inverse of the estimated propensity for missingness, Pr(R i = 0|X i , Z i ) −1 , as well as the probabilistic score, p i,match .The weights take the following form: The above weights account for both the MAR and MCAR process that determine if a match is observed, and the uncertainty associated with a potential mismatch.Little and Rubin [18] showed that IPW would lead to unbiased estimates of β 1 in the case of MAR.For this and the complete case approach, we do not consider censoring, as the data points are limited to those with observed matches/event times ( R i = 1). Censoring atW i One way to make use of the full dataset, including true matches, equivocal matches, and non-matches, is to right-censor all unmatched individuals (that is, those with unobserved death times, or R i = 0 ) at their last observed follow-up during the study, which is, in this case, the census date, W i .The validity of this approach requires that censoring be unrelated to the failure time, T i (4) (i.e.non-informative censoring) [19].Since W i occurs on a fixed date, irrespective of T i or any characteristics of the individuals, this assumption is reasonable. 
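The weighted complete-case analysis just described can be written down compactly in R with the survival package, which the paper itself uses later for model fitting. The sketch below is illustrative only: the data frame dat and its columns (t_obs, r_match, p_match, x, z) are hypothetical names, and the weight is taken here as the probabilistic score divided by the estimated probability of observing a match, with robust standard errors as one reasonable choice for the weighted Cox fit.

```r
# Illustrative IPW analysis on matched records only (hypothetical column names).
library(survival)

matched <- subset(dat, r_match == 1)
matched$event <- 1                         # every matched record contributes a death

# Propensity of a successful match, estimated from exposure x and covariate z
fit_match <- glm(r_match ~ x + z, family = binomial, data = dat)
matched$pr_match <- predict(fit_match, newdata = matched, type = "response")

# Weight: probabilistic score divided by the estimated probability of being matched
matched$w <- matched$p_match / matched$pr_match

# Weighted Cox proportional hazards and Weibull AFT fits
cox_ipw <- coxph(Surv(t_obs, event) ~ x + z, data = matched,
                 weights = w, robust = TRUE)
aft_ipw <- survreg(Surv(t_obs, event) ~ x + z, data = matched,
                   weights = w, dist = "weibull")
```

Setting all weights to 1 in the same code recovers the unweighted complete-case analysis used as a comparator throughout.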
Multiple imputation methods Imputation is another means of including all data points in the analysis, using imputed survival times in place of the missing survival times.In a multiple-imputation procedure, multiple (we denote this number as B ) datasets are created by imputing the missing event times B times, according to an assumed model for the missing values. With the imputed data, we obtain B estimates of median survival and log-HR, which are combined using Rubin's rules [20]. In our framework of imperfect linkage, we impute event times both for individuals with no match, as well as those with equivocal matches (i.e., those with a probabilistic score, p i,match < P ∈ [0, 1] ).Once the event times have been imputed, model estimation proceeds using both the observed and imputed data.This means that individuals who were matched equivocally ( p i,match ≤ P) appear twice in the analytic data set: once using the matched event time, and another using the imputed event time.The matched event time will receive a weight of p i,match in model estimation, while the imputed event time receives a weight of (1 − p i,match ) .Individuals who were not matched (missing an event time) will contribute only their imputed event time to the likelihood with a weight of 1. We investigate two multiple-imputation models for the missing and equivocal survival times: multiple imputation of the restricted mean (MIRM) and multiple imputation of conditional survival (MICS). Recall, the restricted mean survival time (RMST) is the expected or mean value of min (T i , τ ), where τ is a pre- specified time limit of interest.RMST is represented as the area under the survival curve up to time τ , Equation 5 can be thought of as the average life expectancy over a fixed time interval, (0, τ ) , as opposed to a more general interpretation of mean survival that does not account for temporal differences in event-time distribution [21].Imputing mean survival restricted to τ is of interest in our study context, as we would not expect persons to live beyond a certain age, for example, 100 years.Furthermore, Liu, Murray, and Tsodikov [22] introduced an algorithm for imputing RMST as a function of covariates.The algorithm first fits a modified AFT model to the complete observations (those with R i = 0 ) that accounts for the restricted mean structure, as follows ( 5) S(t)dt With the imputation proceeds on the scale of log(min(T i , τ )) .For each of the imputed datasets, log(min(T i , τ )) i=1 are generated from a multivariate normal distribution with mean equal to the fitted values from RMST model and the corresponding covariance matrix. For MICS approach, we recall that conditional survival is defined as the probability of surviving a further u years, having survived up to time t .This is different from overall survival, which refers to the probability of surviving to t years from time 0. 
Conditional survival, denoted as S_C(u + t | t), is evaluated as S_C(u + t | t) = S(u + t)/S(t). In the context of missing data, this distribution is useful for imputing event times conditional on surviving to time t [23]. We seek to impute using its related cumulative distribution function (CDF), F(u | t) = 1 − S(u + t)/S(t). Since all study participants were observed at the date of the census, we could impute the missing death times conditional on having survived to time W_i. We estimate conditional survival probabilities using the observed data under a Weibull AFT working model. Specifically, S(T_i | X_i, Z_i) = exp(−γ_i T_i^p), where γ_i = exp(−(α_0 + α_1 X_i + α_2 Z_i))^p. With this distribution, we can impute any percentile of the CDF using the probability integral transformation. We randomly generate percentiles q_i as Uniform(0,1) and impute the missing death time, calculated as u_i + W_i, as follows:

T_i^imp = u_i + W_i = (W_i^p − log(1 − q_i)/γ_i)^(1/p).   (9)

The imputed event times can all be treated as observed, or we can apply right-censoring at the time V_i for those with imputed time T_i^imp > V_i, to mimic a gold-standard analysis where all T_i ≤ V_i are observed and T_i > V_i are censored. We use the latter approach in our simulations and data application.

The approaches described above are summarized in Table 1 below.

Table 1 Summary of the weights assigned to unequivocal matches, equivocal matches, and nonmatches under each approach: complete case (CC); inverse probability weighting of all matches (IPW); censoring nonmatches at W_i (CENS); multiple imputation for equivocal matches and nonmatches (MIRM and MICS)
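A single MICS draw under Eq. (9) can be sketched in R as follows. This is a minimal illustration under the Weibull AFT working model above rather than the authors' code; the data frames matched and to_impute and their columns (t_obs, event, x, z, w_census, v_analysis) are hypothetical names, and in practice the draw would be repeated B times and the resulting fits pooled with Rubin's rules.

```r
# Illustrative single MICS imputation draw (hypothetical object and column names).
library(survival)

# Weibull AFT working model fit to the unequivocally matched records
aft <- survreg(Surv(t_obs, event) ~ x + z, data = matched, dist = "weibull")

p_shape <- 1 / aft$scale                                   # Weibull shape p
lp      <- predict(aft, newdata = to_impute, type = "lp")  # alpha0 + alpha1*x + alpha2*z
gamma_i <- exp(-p_shape * lp)                              # scale so that S(t) = exp(-gamma * t^p)

# Probability integral transform, conditional on surviving past the census age w_census
q     <- runif(nrow(to_impute))
w     <- to_impute$w_census
t_imp <- (w^p_shape - log(1 - q) / gamma_i)^(1 / p_shape)

# Right-censor imputed times that fall after the analysis age v_analysis
to_impute$t_imp <- pmin(t_imp, to_impute$v_analysis)
to_impute$d_imp <- as.numeric(t_imp <= to_impute$v_analysis)
```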
Simulation study

Design

We conduct a simulation study to evaluate the performance of the 5 missing data methods described (CC, IPW, CENS, MIRM, MICS) on the estimation of covariate-specific median survival (i.e. median survival within subgroups defined by X_i and Z_i, denoted as (M_00, M_01, M_10, M_11)), covariate-averaged median survival (median survival for X_i = 0 and X_i = 1, averaged over the distribution of the covariate Z_i, denoted as (M_0, M_1)), and the effect parameters from the Cox PH (β_1) and Weibull AFT (α_1) models. Data are simulated to reflect the historical census setting where everyone in the study population is observed at the date of the census, but event times are MCAR or MAR for a subset of individuals. The analysis date is set to occur 50 years after the census date, thus for the simulation we have V_i − W_i = 50 years. The performance of the missing data methods is evaluated in comparison to a gold-standard analysis, in which we observe all death times that occur before V_i, and those still alive at V_i are right-censored at V_i. We denote this gold-standard analysis as 'Fully Observed'. We consider five settings, where survival and/or missingness may depend on the exposure of interest, X_i, or covariate Z_i, or both X_i and Z_i. If missingness depends on Z_i only, while the outcome model includes X_i only, then missingness is MCAR. However, if Z_i is also predictive of survival, or both the survival and missingness depend on X_i, then missingness is MAR. Specifically, survival times T_i are generated from the Weibull AFT model above, where X_i ∼ Binomial(0.5) and Z_i ∼ Binomial(0.5), and the missingness indicator is generated according to the setting-specific models summarized in Table 2. Values for α are chosen to reflect possible lifetime distributions in an association study comparing a healthy population to an exposed population. In settings where survival time and/or missingness depend on only one variable, the parameter corresponding to the excluded variable is set to 0. This is described in more detail in Table 2 below. Age at the time of the census is generated as W_i ∼ Uniform(0, T_i).

We further introduce some random error to the matching process in the form of measurement error, using a randomly generated probabilistic score. A probabilistic score, p_i,match, is produced for all observed matches and follows a Beta(8,2) distribution. For those with p_i,match > 0.8 (i.e. true or unequivocal matches), we set the matched event time to be equal to their true event time (i.e. T*_i = T_i). For those with p_i,match < 0.8 we introduce error to the matched event time as

T*_i = T_i + φ_i,   (10)

where φ_i ∼ N(0, (1.8^(1/p_i,match))^2). Thus, the smaller p_i,match is, the greater the measurement error. The fifth simulation setting modifies the p_i,match distribution to be dependent on Z_i, such that the likelihood of an unequivocal match is lower when Z_i = 1. This is to reflect real-world settings where the quality and accuracy of linkage variables may vary based on individual characteristics (for example, name changes for married women, or a lack of available data for foreign-born individuals).

In all settings, we include both X_i and Z_i in the model for the censoring weights in IPW. We assume the correct specification of the final survival models by including the same variables in imputation and analysis as we use in data generation. For MIRM, values for τ (80 and 120) were selected that were (1) sufficiently different so as to show sensitivity of performance to τ and (2) near to the median and upper bound, respectively, of the empirical distribution of survival times generated (reported in Table 2 above). Cox PH and AFT models are fit using the survival::coxph() and survival::survreg() functions in R, respectively. In each of the four settings, we perform K = 500 simulations. For the kth iteration, a dataset of size n = 1000 is generated, and estimates for the parameters of interest are obtained using each of the following: the fully observed data (gold-standard), complete cases only (without weighting), IPW, CENS, MIRM, and MICS. Empirical mean bias is calculated for β_1, α_1, M_0, M_1, M_00, M_01, M_10, M_11 and M_1 − M_0 over all K iterations with respect to the gold-standard estimates, as well as empirical standard errors for β_1 and α_1. Model-based standard errors for β_1 and α_1 are obtained from the outputted covariance matrices of the coxph and survreg functions in R, respectively.

Association between exposure and outcome

Simulation results for the exposure-outcome association parameters (the log hazard ratio and log event time ratio) can be found in Figs. 2 and 3, which show that the relative performance of the missing data methods varies based on the model used and the setting. When fitting a Cox PH model, both the weighted (IPW) and unweighted complete case analyses underestimate β_1 under all MCAR and MAR settings (Fig. 2) when compared to the fully observed 'gold-standard' analysis, as the IPW only improved efficiency. Censoring at W_i produces unbiased estimates of β_1 when missingness is MCAR or MAR with dependence on covariate Z_i only. However, when missingness is influenced by the exposure variable X_i, censoring at W_i overestimates β_1. Imputing based on conditional survival (MICS) reduces bias in all four settings and produces narrower confidence intervals compared to censoring, complete case analysis or IPW. Results for MIRM vary substantially based on the value of the upper bound τ, with the less restrictive bound (τ = 120 years) yielding less biased estimates compared to τ = 80.

Performance of the methods when estimating α_1 from an AFT model (Fig. 3) contrasts sharply with their Cox model results. IPW and the unweighted complete case analysis produce the least biased estimates of α_1. Furthermore, IPW improves precision in comparison to the unweighted analysis, with similar efficiency gains as MICS. MICS again produces estimates with low bias, comparable with IPW, but with wider confidence intervals. Censoring at W_i leads to severe bias when estimating α_1 in all settings. Conversely to the Cox model results, MIRM performed better with the higher bound (τ = 120) compared to τ = 80 in all settings except for when p_i,match depends on Z_i.

Fig. 3 Empirical bias and model-based confidence intervals for α_1: (1) MCAR, (2) MAR with both T_i and R_i dependent on X_i only, (3) MAR with T_i dependent on X_i and Z_i, while R_i depends on X_i only, (4) MAR with both T_i and R_i dependent on X_i and Z_i

Median survival times

In the simulation results for median survival times (Figs. 4 and 5), MICS most consistently results in low bias when estimating median survival within exposure groups X_i = 0 and X_i = 1, as well as covariate-dependent median survival (i.e., within subgroups defined by both Z_i and X_i). This method produces estimates close to the fully observed, gold-standard approach, in all MCAR and MAR settings. It is, however, outperformed by MIRM with large τ when estimating M_X in MAR settings. The IPW approach also reduces bias compared to the complete case analysis but is outperformed by MICS. Censoring at W_i reduces bias in exposure-specific median survival, but results in greater bias for more disaggregated estimates. Note that regardless of method, the bias in estimating M_0 increases when p_i,match depends on Z_i.

Fig. 4 Empirical bias for M_0 and M_1: (1) MCAR, (2) MAR with both T_i and R_i dependent on X_i only, (3) MAR with T_i dependent on X_i and Z_i, while R_i depends on X_i only, (4) MAR with both T_i and R_i dependent on X_i and Z_i

Fig. 5 Empirical bias for M_00, M_01, M_10 and M_11: (1) MCAR, (2) MAR with both T_i and R_i dependent on X_i only, (3) MAR with T_i dependent on X_i and Z_i, while R_i depends on X_i only, (4) MAR with both T_i and R_i dependent on X_i and Z_i

Sensitivity analysis

A sensitivity analysis was performed to better understand the importance of imputation model specification on the performance of the multiple imputation approaches (MIRM and MICS). We used misspecified models for imputation, including one that omitted covariate Z_i, and one that had an interaction between X_i and Z_i. The results (Figures A.1-A.4 in the appendix) suggest that for all methods, the Cox-based hazard ratio as well as median survival can be biased under imputation model misspecification, while the AFT-based hazard ratio was more robust to misspecification. Greater bias was observed as a result of covariate omission as opposed to inclusion of an interaction term.
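To make the simulation design above concrete, the following R sketch generates one data set of the kind described. It is a hedged illustration only: the Weibull shape and the regression and missingness coefficients are placeholders rather than the α values of Table 2, and the error scale assumed for equivocal matches is likewise an assumed form, not necessarily the exact specification used by the authors.

```r
# Illustrative sketch of one simulated data set (placeholder parameter values).
set.seed(1)
n <- 1000
x <- rbinom(n, 1, 0.5)
z <- rbinom(n, 1, 0.5)

# Weibull lifetimes whose scale depends on exposure and covariate (placeholder effects)
t_true <- rweibull(n, shape = 4, scale = exp(4.3 - 0.2 * x - 0.1 * z))

# Everyone is observed at the census; the analysis date falls 50 years later
w_census   <- runif(n, 0, t_true)
v_analysis <- w_census + 50

# A match can only be observed for deaths before the analysis date (MAR in x and z)
died    <- t_true < v_analysis
r_match <- rbinom(n, 1, plogis(1.0 - 0.8 * x - 0.5 * z)) * died

# Probabilistic scores; equivocal matches (score <= 0.8) carry a noisy event time
p_match <- ifelse(r_match == 1, rbeta(n, 8, 2), NA)
sd_err  <- ifelse(!is.na(p_match) & p_match <= 0.8, 1.8^(1 / p_match), 0)
t_star  <- ifelse(r_match == 1,
                  ifelse(p_match > 0.8, t_true, t_true + rnorm(n, 0, sd_err)),
                  NA)
```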
Application to historical Ambler cohort data

A historical cohort of individuals living in Ambler, PA was derived from 1930 census data. The cohort was created to study the effects of occupational, paraoccupational, and environmental asbestos exposure on life expectancy. Data on 4,514 adult residents from the 1930 census was publicly available on Ancestry.com, including individual demographic information: name, address, household identifier, household members, birth year, birthplace, race, sex, and occupation. Individuals were classified as having occupational exposure to asbestos if their listed place of work was one of the following: asbestos, shingles plant, shingle mill, chemical plant, chemical works, chemical, chemical manufacturer, mill. Paraoccupational exposure, a form of non-occupational exposure, was defined as having the same residential address as an individual with occupational exposure. For individuals without a listed house number, exposure was classified based on the listed familial relationship to the occupationally exposed individual (e.g., wife, son, daughter).

The outcome of interest was overall mortality, with survival time operationalized as the age of death. The vital status of the individuals in the cohort was first obtained through searches on Ancestry.com, which features mortality data from a variety of death-related archives, the primary of which include Pennsylvania Death Certificates, U.S. Social Security Death Index, and the U.S. Grave Index. For individuals whose death data could not be fully identified through Ancestry.com, attempts were made to match them with National Death Index (NDI) records using additional identifiers such as social security numbers. Note that the NDI only contains information on deaths from 1979 onwards. Where discrepancies in death record dates occurred, the NDI record was used if the probabilistic matching score variable (a measure of the quality of matching provided by NDI [24]) exceeded 30.

To estimate the median survival time and association parameters for the occupational and para-occupational asbestos exposure on life expectancy, Kaplan-Meier curves, Cox PH and Weibull AFT models were fit using a complete-case analysis, IPW, MIRM and MICS. Analysis models adjusted for age, sex, race, and place of birth (U.S. vs. non-U.S.). For inverse probability weighting, the propensity scores for missingness were modeled as a function of birthplace, race, sex, and age. Probabilistic matching scores from the NDI were transformed to the 0,1-scale. On the new scale, a score of 1 was considered an unequivocal match. Death dates identified through Ancestry.com were also treated as true matches (probabilistic score of 1).

A total of 4,507 individuals were included in the analysis with complete covariate information, in which 87.5% of individuals were of white ethnicity and 12.4% were black, 49.3% were female and 15.9% were born outside of the U.S. The average age was 29.6 years (± 20.3 years). 10.5% of individuals were occupationally exposed to asbestos, while 36.2% had para-occupational exposure. Overall, death dates were identified for 2,440 individuals (54% of the cohort). Population characteristics stratified by exposure type and event time observation are summarized in Tables 3 and 4. As observed by Wortzel et al.
[17] and confirmed by our results in Table 4, ascertainment bias for death-related data exists for this cohort, as those who were U.S. born, older, male, and of white ethnicity were more likely to have their death dates identified.Being male, U.S. born, white and occupational exposure were also associated with higher probabilistic matching scores (that is, better quality matches).These groups were also less likely to be occupationally or paraoccupationally exposed to asbestos (see Table 3).Assuming ascertainment was unrelated to life expectancy, we sought to implement the aforementioned methods in handling this MAR problem. Table 5 suggests that the probabilistic scores are associated with individual characteristics.Since this impacts the relative performance of MIRM with different τ (as shown in simulations) and given that median and maximum survival times among true matches are 71.77and 109.8 years respectively, both τ = 80 and τ = 110 were used in implementing the MIRM method. Table 6 shows the median survival estimates for the overall cohort and within groups defined by occupational exposure, para-occupational exposure, race, and sex.We observe that the median survival was lower for black residents compared to white residents and for males compared to females.Overall and within groups, the median survival times were lower among individuals who were occupationally exposed or para-occupationally exposed, compared to those who were unexposed.In all groups, MIRM produced the lowest estimated median survival. Further analysis using semi-parametric Cox PH models (Table 7) and parametric AFT models (Table 8) revealed that the observed differences in survival by para-occupational exposure were non-significant, except for the MIRM result for black residents.A significant overall effect of occupational exposure on survival was observed using IPW under the Cox PH model.Similar results were observed among the black subpopulation and male subpopulation, with the impact of occupational exposure being most severe for black.In all subgroups, MIRM estimates deviated sharply from the other methods, though not in a consistent direction.When fitting an AFT model, estimates for event time ratios for the effect of occupational or para-occupational exposure only reached statistical significance with MICS and MIRM in the black subpopulation, and with MIRM among female individuals.Overall, this illustrates the benefit of improved efficiency of IPW when accounting for the missing-data mechanism, as observed in simulations. We assessed the quality of event time imputation using MIRM and MICS in Appendix tables A.5 and A.6.Findings showed that MICS overestimated survival times, while MIRM approaches produced a narrower range of event times.As discussed in the simulation study, this may suggest we failed to capture some unmeasured predictor(s) of life expectancy in the imputation model, though the AFT-based hazard ratio estimates should be minimally biased with this misspecification. 
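The multiple-imputation estimates reported in Tables 6, 7 and 8 are obtained by pooling across the B imputed data sets with Rubin's rules, as stated in the Methods. A minimal sketch of that pooling step is given below; the object names are hypothetical (fits is assumed to be a list of fitted models, one per imputed data set, and "x_occ" stands in for the occupational-exposure coefficient).

```r
# Hypothetical sketch: pool B imputed-data log-HR estimates with Rubin's rules.
pool_rubin <- function(est, se) {
  B    <- length(est)
  qbar <- mean(est)                      # pooled point estimate
  ubar <- mean(se^2)                     # within-imputation variance
  bvar <- var(est)                       # between-imputation variance
  tvar <- ubar + (1 + 1 / B) * bvar      # total variance
  c(estimate = qbar, se = sqrt(tvar))
}

est <- sapply(fits, function(f) coef(f)["x_occ"])
se  <- sapply(fits, function(f) sqrt(vcov(f)["x_occ", "x_occ"]))
pool_rubin(est, se)   # pooled log-HR and its standard error; exp() gives the HR
```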
Discussion Historical census data linked to administrative records can be a useful resource for epidemiological studies, particularly for associations between exposures and outcomes with historical significance or, as in our use case of asbestos exposure, long incubation periods before population effects can be observed.However, differential success in identifying death records based on individual characteristics can threaten the validity of results.In this paper, we considered the use of historical census data and death records in time-to-event modeling, where death dates may be missing for some individuals.We explored the application of various censoring, weighting, and imputation approaches for handling missing event times, in comparison to a gold-standard approach which assumes that all events that occurred before the date of analysis have been observed.We additionally used weighting to account for the uncertainty associated with equivocal matches. We show that for estimating log HRs from a Cox PH model, a naïve analysis using only the complete records (weighted (IPW) or unweighted) can lead to biased estimates for the log HR, while censoring on the date of the census can produce unbiased estimates only if the missingness mechanism is independent of the exposure variable of interest, causing severe bias otherwise.Imputing event times based on the conditional survival distribution can be useful for fitting Cox PH models, where point estimates are more robust to the missingness mechanism compared to censoring on the census date.MICS similarly results in the least bias when fitting AFT models, while censoring produces severely biased estimates in all settings.Regarding the precision of the estimates, IPW achieves the greatest efficiency for fitting AFT models (while being minimally biased), while imputation based on conditional survival was most efficient when fitting Cox PH models.Imputation based on conditional survival was also found to be the most accurate among the methods for estimating median survival.MIRM similarly reduced bias when estimating median survival, but the method's performance was the least consistent, resulting in large bias when linkage quality is covariate-dependent, but minimal bias otherwise.Furthermore, the setting of τ is not straightforward.τ set close to the maximum of the distribution led to low bias relative to a smaller τ , but performed poorly when the matching score was dependent on Z i .performed bet- ter in Cox regression, but higher τ was preferred for the Our empirical study was not without limitations.Firstly, we assumed the correct specification of survival and imputation models, and that all variables that may impact missingness and/or survival were correctly measured and observed.If missingness is related to variables not collected at the time of the census, or time-varying variables, this may impact our findings, particularly for inverse-probability weights.We also showed the sensitivity of imputation methods to predictor/covariate omission in the imputation model, with AFT model hazard ratios being most robust to misspecification.Furthermore, we did not consider possible interactions between the covariates and the exposure variable in the missingness or survival mechanisms.Robustness of the results to model misspecification should be investigated in future work.In addition, we did not vary the level of missingness and/or censoring in evaluating the performance of our methods.In our data application, we assumed that the observed variables were 
sufficient in accounting for differential ascertainment. Finally, we assumed non-informative censoring in our simulations and data application, meaning the censoring mechanism is independent of the time-to-event. However, in practice, this may not hold as life expectancy has increased substantially over the past century with advances in medicine, public health, and nutrition. Administrative record-keeping has also improved over the same time, resulting in greater linkage success for later birth cohorts, who also have longer survival. One way to account for this is by adjusting for calendar effects in survival models.

Conclusions

Future work should investigate extensions to differential missingness of exposure variables, which may also be found in studies with EHR and genomic data [27,28], or joint missingness of exposure and outcome variables. The performance of machine-learning approaches, such as random forests and k-nearest neighbor algorithms, can also be investigated for this setting. Finally, it should be emphasized that although we have proposed post-hoc measures to account for missing event outcomes, efforts to improve successful data linkages, such as the creation of more centralized databases, or control measures to promote consistency in the quality of data across sources, are preferable.

Fig. 2 Empirical bias and model-based confidence intervals for β_1: (1) MCAR, (2) MAR with both T_i and R_i dependent on X_i only, (3) MAR with T_i dependent on X_i and Z_i, while R_i depends on X_i only, (4) MAR with both T_i and R_i dependent on X_i and Z_i

Table 2 Simulation study design

Table 3 Characteristics of the study population by exposure type (n = 4514). *Indicates statistically significant difference at the 0.05 significance level, based on Wilcoxon rank-sum test for age, and chi-square test for other variables

Table 4 Characteristics of the study population by missing or observed death times (n = 4514). *Indicates statistically significant difference at the 0.05 significance level, based on Wilcoxon rank-sum test for age, and chi-square test for other variables

Table 5 Probabilistic score distribution by individual characteristics for matches

Table 7 Hazard ratio (HR) estimates from Cox PH model for occupational and para-occupational exposure, adjusting for age, sex and race. Figures in bold indicate statistically significant (p < 0.05) effects
2024-03-14T13:05:22.352Z
2024-03-13T00:00:00.000
{ "year": 2024, "sha1": "0c46caaee2ce82b970fff7795b1758f987f2730f", "oa_license": "CCBY", "oa_url": "https://bmcmedresmethodol.biomedcentral.com/counter/pdf/10.1186/s12874-024-02194-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8e837b0993778e23150af3105fa7cfb4dfb261d2", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
119662760
pes2o/s2orc
v3-fos-license
Symmetric Liapunov center theorem for minimal orbit

Using the techniques of equivariant bifurcation theory we prove the existence of non-stationary periodic solutions of $\Gamma$-symmetric systems $\ddot q(t)=-\nabla U(q(t))$ in any neighborhood of an isolated orbit of minima $\Gamma(q_0)$ of the potential $U$. We show the strength of our result by proving the existence of new families of periodic orbits in the Lennard-Jones two- and three-body problems and in the Schwarzschild three-body problem.

Introduction

The study of the existence of non-stationary periodic solutions of autonomous ordinary differential equations has a long history. Particular attention was paid to the study of the existence of such solutions in a neighborhood of isolated equilibria, see for instance [12,17,21,22,25,29] and references therein. Of course this list is far from being complete. One of the most famous theorems concerning the existence of periodic solutions of ordinary differential equations is the celebrated Liapunov center theorem.

Let Ω ⊂ R^n be an open and Γ-invariant subset of R^n considered as a representation of a compact Lie group Γ. Assume that q_0 ∈ Ω is a critical point of the Γ-invariant potential U : Ω → R of class C². Since for all γ ∈ Γ the equality U(γq_0) = U(q_0) holds and ∇U(q_0) = 0, the orbit Γ(q_0) = {γq_0 : γ ∈ Γ} consists of critical points of U, i.e. Γ(q_0) ⊂ (∇U)^{-1}(0). Note that if dim Γ ≥ 1 then it can happen that dim Γ(q_0) ≥ 1, i.e. the critical point q_0 is not isolated in (∇U)^{-1}(0). That is why for higher-dimensional orbits Γ(q_0) we cannot apply the classical Liapunov center theorem.

In [26] we have proved the Symmetric Liapunov center theorem for a non-degenerate orbit of critical points Γ(q_0), i.e. we have assumed that dim Γ(q_0) = dim ker ∇²U(q_0). More precisely, with the additional hypothesis that the isotropy group Γ_{q_0} = {γ ∈ Γ : γq_0 = q_0} is trivial and that there is at least one positive eigenvalue of the Hessian ∇²U(q_0), we have proved the existence of non-stationary periodic solutions of system (1.1) in any neighborhood of the orbit Γ(q_0). Moreover, we are able to control the minimal period of these solutions in terms of the positive eigenvalues of ∇²U(q_0).

For the Lennard-Jones and Schwarzschild problems discussed in the last section there are isolated degenerate circles (Γ = SO(2)-orbits) of stationary solutions which consist of minima of the corresponding potentials. We underline that we are not able to study the non-stationary periodic solutions of these problems by applying the classical Liapunov center theorem, because these equilibria are not isolated. We also emphasize that, since these orbits are degenerate, neither can we study the non-stationary periodic solutions of these problems by applying the Symmetric Liapunov center theorem for a non-degenerate orbit proved in [26]. Therefore there is a natural need to prove the Symmetric Liapunov center theorem for isolated orbits of minima.

The inspiration for writing this article, in addition to the discussion above, was a nice paper of Rabinowitz [27], where the author proved that the Brouwer index of an isolated minimum of a potential of class C¹ is equal to 1. This result was also proved later by Amann [2]. The goal of this paper is to prove the Symmetric Liapunov center theorem for an isolated orbit of minima of the potential U. Our main result is the following. To prove the above theorem we apply techniques of the (Γ×S¹)-equivariant bifurcation theory.
We present the problem of the existence of periodic solutions of system (1.1) as (Γ×S 1 )-symmetric variational bifurcation problem i.e. we look for periodic solutions of system (1.1) as a (Γ × S 1 )orbits of critical points of a family (Γ × S 1 )-invariant functionals defined on a suitably chosen orthogonal Hilbert representation of Γ×S 1 . As topological tools we apply the (Γ×S 1 )-equivariant Conley index due to Izydorek, see [18], and the degree for (Γ×S 1 )-equivariant gradient operators due to Gołȩbiewska and the second author, see [15]. More precisely, we have proved changes of the equivariant Conley index and the degree for equivariant gradient operators along the family Γ(q 0 ) × (0, +∞) ⊂ H 1 2π × (0, +∞) of stationary solutions of the following system   We emphasize that change of the Conley index implies the existence of a local bifurcation of periodic solutions of system (1.2), whereas a change of the degree implies the existence of a global bifurcation of periodic solutions of system (1.2) satisfying the Rabinowitz type alternative. In order to get an accurate model to study the action of the intermolecular and gravitational forces at the same time, many authors from physics, astrophysics, astronomy, cosmology and chemistry have introduced new kinds of potentials, with a structure different from the classical Newtonians and Coulombians potentials. In this way, potentials that have been used very often in those branches of the science are the Lennard-Jones and the Schwarzschild potentials. In the last section we apply Theorem 1.1 to the study non-stationary periodic solutions of the Lennard-Jones and the Schwarzschild N -body problems. After introduction our paper is organized as follows. In Section 2 we summarize without proofs the relevant material on equivariant topology and prove some preliminary results. Throughout this section G stands for a compact Lie group. Since admissible pairs play a crucial role in our reasonings, the notion of an admissible pair is given in Definition 2.1.1. In Definition 2.3.1 we introduce the notion of G-equivariant spectrum. G-equivariant Euler characteristic of G-spectrum is given by formula (2.3.2). The G-equivariant Conley index which we apply in this article is a G-homotopy type of a G-spectrum. In Theorem 2.4.2 we have described a relationship of the G-equivariant Conley index of the orbit G(x 0 ) and G x 0 -equivariant Conley index of the {x 0 } considered as an isolated critical point of the potential restricted to the orthogonal complement of T x 0 G(x 0 ) at x 0 . Theorem 2.4.3 is an infinite-dimensional generalization of a combination of Theorems 2.4.1 and 2.4.2. This theorem will play important role in the proof of the main result of our paper. In the last subsection of Section 2 we have proved the splitting theorem, see Theorem 2.5.2, which plays a crucial role in the study of isolated degenerate critical points. Section 3 is devoted to the proof of the main results of this article. The study of periodic solutions of any period of system (1.1) is equivalent to the study of 2π-periodic solutions of a family of systems, see (1.2). We have considered the solutions of system (1.2) as critical orbits of G = (Γ × S 1 )-invariant family of functionals Φ(q, λ) of class C 2 defined by formula (3.1.1). The necessary and sufficient conditions for the existence of local bifurcations of solutions of equation ∇ q Φ(q, λ) = 0 have been proved in Section 3.2, see Theorems 3.2.1, 3.2.2, respectively. 
In Section 3.3 we study the G q 0 -equivariant Conley index on the space orthogonal to the orbit G(q 0 ) at q 0 , see Lemmas 3.3.1, 3.3.2. In Subsection 3.4 we have proved the main result of this paper. Section 4 contains the illustration of the abstract result of our article. We apply Theorem 1.1 to prove the existence of non-stationary periodic solutions of the Lennard-Jones and Schwarzschild problems, whose potentials are Γ = SO(2)-invariant. Preliminary results In this section, for the convenience of the reader, we repeat the relevant material from [19,35] without proofs, thus making our exposition self-contained. Moreover, we prove some preliminary results. Throughout this section G stands for a compact Lie group. 2.1. Groups and their representations. Denote by sub(G) the set of all closed subgroups of G. Two subgroups H, H ′ ∈ sub(G) are said to be conjugate in G if there is g ∈ G such that H = gH ′ g −1 . The conjugacy is an equivalence relation on sub(G). The class of H ∈ sub(G) will be denoted by (H) G and the set of conjugacy classes will be denoted by sub [G]. Denote by ρ : G → O(n, R) a continuous homomorphism. The space R n with the G-action defined by G × R n ∋ (g, x) → ρ(g)x ∈ R n is said to be a real, orthogonal representation of G which we write V = (R n , ρ). To simplify notation we write gx instead of ρ(g)x and R n instead of V if the homomorphism is given in general. If x ∈ R n then a group G x = {g ∈ G : gx = x} ∈ sub(G) is said to be the isotropy group of x and G(x) = {gx : g ∈ G} is called the orbit through x. It is known that the orbit G(x) is a smooth G-manifold G-diffeomorphic to G/G x . An open subset Ω ⊂ R n is said to be G-invariant if G(x) ⊂ Ω for every x ∈ Ω. Two orthogonal representations of G, say V = (R n , ρ), V ′ = (R n , ρ ′ ), are equivalent (briefly For k, m ∈ N we denote by R[k, m] the direct sum of k copies of R[1, m], we also denote by R[k, 0] the k-dimensional trivial representation of S 1 . The following classical result gives a complete classification (up to an equivalence) of finite-dimensional representations of S 1 , see [1]. Moreover, the equivalence class of V is uniquely determined by sequences {k i }, {m i }. Below we recall the notion of an admissible pair, which was introduced in [26]. Recall that if Γ is a compact Lie group, then the pair (Γ × S 1 , {e} × S 1 ) is admissible, see Lemma 2.1 of [26]. This property will play a crucial role in the next section. for every g ∈ G and x ∈ Ω. The set of G-invariant C k -potentials will be denoted by C k G (Ω, R). Definition 2.2.2. A map ψ : Ω → V of the class C k−1 is called G-equivariant C k−1 -map, if ψ(gx) = gψ(x) for every g ∈ G and x ∈ Ω. The set of G-equivariant C k−1 -maps will be denoted by C k−1 G (Ω, V). Fix ϕ ∈ C 2 G (Ω, R) and denote by ∇ϕ, ∇ 2 ϕ the gradient and the Hessian of ϕ, respectively. For x 0 ∈ Ω denote by m − (∇ 2 ϕ(x 0 )) the Morse index of the Hessian of ϕ at x 0 i.e. the sum of the multiplicities of negative eigenvalues of the symmetric matrix ∇ 2 ϕ(x 0 ). Equivariant Conley index and equivariant Euler characteristic. Denote by F * (G) the category of finite pointed G-CW-complexes, see [35], where morphisms are continuous Gequivariant maps preserving base points. By F * [G] we denote the set of G-homotopy types of elements of F * (G), where [X] G ∈ F * [G] (or [X] when no confusion can arise) denotes a Ghomotopy type of the pointed G-CW complex X ∈ F * (G). If X is a G-CW-complex without a base point, then we denote by X + a pointed G-CW-complex X + = X ∪ { * }. 
A finite-dimensional G-equivariant Conley index of an isolated invariant set S under a G-equivariant vector field ϑ will be denoted as CI G (S, ϑ), see [4,11,13,31] for the definition. Recall that CI G (S, ϑ) ∈ F * [G], see [13]. Below we present the infinite-dimensional extension of the equivariant Conley index due to Izydorek [18] which requires the notion of equivariant spectra, see also [14,28]. Let ξ = (V n ) ∞ n=0 be a sequence of finite-dimensional orthogonal representation of G. The set of G-spectra of type ξ is denoted by GS(ξ). Two G-maps f, g : E(ξ) → E ′ (ξ) are G-homotopic if there exists n 1 ≥ n 0 such that f n , g n : E → E ′ are G-homotopic for n ≥ n 1 . Following this definition in a natural way we understand a G-homotopy equivalence of two spectra E(ξ), E ′ (ξ). The G-homotopy type of a G-spectrum E(ξ) will be denoted by [E(ξ)] G (or shorter [E(ξ)]) and the set of G-homotopy types of G-spectra by [GS(ξ)] or simply [GS] when ξ is fixed or is not known yet. is completely continuous. Denote by ϑ a G-LS-flow, see Definition 2.1 of [18], generated by ∇Φ. Let be O an isolating G-neighborhood for ϑ and put N = Inv ϑ O. Set ξ = (H + k ) ∞ k=1 . Let Φ n : H n → R be given by Φ n = Φ |H n and ϑ n denotes the G-flow generated by ∇Φ n . Note that ∇Φ n (x) = Lx + P n • ∇K(x). Choose sufficiently large n 0 such that for n ≥ n 0 the set O n := O ∩ H n is an isolating G-neighborhood for the flow ϑ n . Then the set Inv ϑn (O n ) admits a G-index pair (Y n , Z n ). We define a spectrum E(ξ) := (Y n /Z n ) ∞ n=n 0 . Then the equivariant Conley index of O with respect to the flow ϑ is given by . Sometimes we will write a vector field and isolated invariant set instead a flow and isolating neighborhood i.e. CI G (N , ∇Φ). Let (U (G), +, ⋆)) be the Euler ring of G, see [35] for the definition and properties of this ring. Let us briefly recall that the Euler ring U (G) is commutative, generated by is the universal additive invariant for finite pointed G-CW-complexes known as the equivariant Euler characteristic. Below we present some properties of the equivariant Euler characteristic χ G (·). • For X, Y ∈ F * (G) we have: The above equality has been proved in [20]. See [20] for more properties of the Euler ring U (S 1 ) and the Euler characteristic χ S 1 . There is a natural extension of the equivariant Euler characteristic for finite pointed G-CWcomplexes to the category of G-equivariant spectra due to Gołȩbiewska and Rybicki [16]. It was shown in [16] that Υ G is well-defined. In fact where n 1 (E) = n 1 (E(ξ)) comes from Definition 2.3.1. Remark 2.3.4. Note that a finite pointed G-CW-complex X can be considered as a constant spectrum E(ξ) where E n = X for all n ≥ 0 and ξ is a sequence of trivial, one-point representations. ). Therefore we can treat CI G and Υ G as natural extensions of CI G and χ G respectively. By Theorems 3.1, 3.5 of [16] we obtain the following product formula. sets for the local G-LS flows generated by ∇Ψ 1 and ∇Ψ 2 respectively then 2.4. How to distinguish two equivariant Conley indexes? Let (V, ·, · ) be a finite-dimensional orthogonal representation of G. Throughout this subsection Ω ⊂ V stands for an open and G-invariant subset. Fix a potential ϕ ∈ C 2 G (Ω, R) and x 0 ∈ (∇ϕ) −1 (0). Suppose that G(x 0 ) is an isolated orbit of critical points of ϕ. Our aim is to simplify the computation of the G-equivariant Conley index The orbit space is denoted by G + ∧ H Y and called the smash over H, see [35]. 
The following theorem gives an interesting and very useful relation between Euler characteristics of [Y] and [G + ∧ H Y]. The proof of this theorem can be found in [26]. In the theorem below we express the G-equivariant Proof. To simplify notations we put H = G x 0 . Firstly we express the G-index pair (N , L) of the orbit G(x 0 ) in terms of the twisted product over H of the H-index pair (N, [19]. Consequently we obtain the following equality Moreover the H-action on all sets above is the same. Therefore the H-orbit spaces with given G-action are G-homeomorphic i.e. and as a consequence which completes the proof. Let H = ∞ n=0 H n be a representation of G. Consider two functionals ϕ 1 , Theorem 2.4.3. Let G(x 1 ), G(x 2 ) be isolated orbits of critical points of the potentials ϕ 1 and ϕ 2 , respectively. Moreover, assume that G Proof. Since φ 1 , φ 2 are in the form of compact perturbation of the same linear operator, the Conley indexes CI H ({x 1 }, −∇φ 1 ) and CI H ({x 2 }, −∇φ 2 ) are the homotopy types of spectra of the same type ξ = (V n ) ∞ n=0 . Denote these spectra by E 1 (ξ) and E 2 (ξ) respectively. Choose for any n ≥ n 1 . The same reasoning can be performed for ϕ 1 and ϕ 2 obtaining similar formula for n ≥ n 2 and G-equivariant Euler characteristics. With all the above it is sufficient to show that Equivariant splitting lemma. Let H be a compact Lie group and let (V, ·, · ) be an orthogonal Hilbert representation of H with an invariant scalar product ·, · . Assume additionally that dim V H < ∞. Here and subsequently, Ω ⊂ V stands for an open and invariant subset of V such that 0 ∈ Ω. Consider a functional Ψ ∈ C 2 H (Ω, R) given by the formula which satisfies the following assumptions Denote by ker A and im A the kernel and the image of ∇ 2 Ψ(0) = A, respectively. Notice that both, ker A and im A, are orthogonal representations of H. Moreover, ker A is finite dimensional and trivial representation of H. The theorem below is an equivariant version of the implicit function theorem. A proof of the theorem below can be found in [12]. In the following theorem (known as the splitting lemma) we prove the existence of equivariant homotopy which allows us to study the product (splitted) flow (∇ϕ(u), Av) where u ∈ ker A, v ∈ im A instead of the general Ψ(x) = 1 2 Ax, x + ζ(x). Note that A is an isomorphism on im A and therefore the study the second flow is standard. Moreover, we are able to describe the first flow ∇ϕ(u), see Remark 2.5.1. Proof. First of all, we define a family of potentials H : where w is obtained from Theorem 2.5.1. This family of functionals was introduced firstly by Dancer [8]. Since we consider, contrary to Dancer, an infinite-dimensional and symmetric case, we check all the details in the proof of this theorem. Since A, ζ, w are H-equivariant and V is an orthogonal representation of H, the functional H(·, t) is H-invariant. Therefore ∇H is a gradient, H-equivariant homotopy, where ∇H denotes the gradient of H with respect to the coordinate x ∈ V. Observe that and that Q∇H((u, v), t) = Av + (1 − t)Q∇ζ(u, v + tw(u)). To finish the proof of Theorem 2.5.2 it is enough to notice that: , by the definition. main result In this section, using the equivariant bifurcation theory techniques, we prove the main result of this article i.e. the Symmetric Liapunov center theorem for minimal orbit, see Theorem 1.1. We consider R n as an orthogonal representation of a compact Lie group Γ. Denote by Ω ⊂ R n an open and Γ-invariant subset. 
Fix U ∈ C²_Γ(Ω, R) and q_0 ∈ Ω a minimum of the potential U such that the isotropy group Γ_{q_0} is trivial. Since U is Γ-invariant, the orbit Γ(q_0) consists of minima of U. Obviously q_0 is a critical point of U and therefore Γ(q_0) ⊂ (∇U)^{-1}(0), i.e. the orbit Γ(q_0) consists of critical points of U.

Remark 3.1. Since Γ_{q_0} is trivial, the orbit Γ(q_0) is Γ-homeomorphic to Γ/Γ_{q_0} = Γ. For this reason it can happen that elements of this orbit are not isolated. For example, if Γ = SO(2) acts freely on Ω, then the orbit Γ(q_0) is SO(2)-homeomorphic to Γ/Γ_{q_0} = Γ/{e} = SO(2) ≈ S¹. Hence we cannot treat q ∈ Γ(q_0) as an isolated critical point of U. This is the reason to apply equivariant Conley index theory.

Note that the study of periodic solutions of any period of system (1.1) is equivalent to the study of 2π-periodic solutions of the following system:
$$\ddot q(t) = -\lambda^{2}\nabla U(q(t)), \qquad q(0) = q(2\pi), \qquad \dot q(0) = \dot q(2\pi). \tag{3.1}$$

Variational setting. In this article we treat solutions of (3.1) as critical points of invariant functionals. This fact allows us to use equivariant bifurcation theory in order to prove our main result. Therefore we present the variational setting for the family (3.1). Define
H^1_{2π} = {u : [0, 2π] → R^n : u is an absolutely continuous map, u(0) = u(2π), u̇ ∈ L²([0, 2π], R^n)}
and the scalar product
$$\langle u, v\rangle_{H^1_{2\pi}} = \int_0^{2\pi} \big((\dot u(t), \dot v(t)) + (u(t), v(t))\big)\, dt,$$
where (·, ·) and ‖·‖ are the usual scalar product and norm in R^n, respectively. It is well known that (H^1_{2π}, ⟨·, ·⟩_{H^1_{2π}}) is a separable Hilbert space. Moreover, it can be considered as an orthogonal representation of G = Γ × S¹, where the action is given by ((γ, e^{iθ}), q)(t) ↦ γ q(t + θ mod 2π). It is known that solutions of system (3.1) are in one-to-one correspondence with S¹-orbits of critical points of the S¹-invariant potential Φ : H^1_{2π} × (0, +∞) → R, where λ is considered as a parameter, see [22]. As R^n is an orthogonal representation of Γ and U is Γ-invariant, the potential Φ is also Γ-invariant. Therefore 2π-periodic solutions of system (3.1) can be considered as critical orbits of the G = (Γ × S¹)-invariant potential Φ, i.e. as solutions of the system ∇_q Φ(q, λ) = 0.

Let {e_1, …, e_n} ⊂ R^n be the standard basis in R^n. Define H_0 = R^n, H_k = span{e_i cos kt, e_i sin kt : i = 1, …, n}, and note that H^1_{2π} is the closure of ⊕_{k=0}^∞ H_k and that the finite-dimensional spaces H_k, k = 0, 1, …, are orthogonal representations of G. Note that the gradient ∇Φ : H^1_{2π} × (0, ∞) → H^1_{2π} is a G-equivariant C¹-operator in the form of a compact perturbation of the identity, see [26] for more details. Summarizing, we will study the existence of G-orbits of critical points of Φ. Let us underline that Φ satisfies the assumptions formulated in Subsection 2.5. The corresponding functional Ψ : H^1_{2π} → R is defined in terms of a linear, self-adjoint, G-equivariant and compact operator L : H^1_{2π} → H^1_{2π}, see [26] for details. It is clear that ∇_q Ψ(q, λ) = q − Lq + λ²∇²U(q_0)q_0.

Proof. Suppose, contrary to our claim, that ker … Since … is an isomorphism, applying Theorem 2.5.1 we obtain that (q_1, q_2, λ) = (q_1, q_2(q_1, λ), λ) is the only solution of (3.2.3) in the neighborhood of (q_0, λ_0). Therefore the study of equation (3.2.1) is equivalent to the study of the reduced equation. Now we are able to control the isotropy group of solutions of equation (3.2.1) in the neighborhood of the orbit G(q_0) × {λ_0}. Indeed, suppose that (q̃_1, q̃_2(q̃_1, λ), λ) is a solution of (3.2.1). Since q_2 is G-equivariant we obtain that G_{q̃_1} = G_{(q̃_1, λ)} ⊂ G_{q̃_2(q̃_1, λ)}, and therefore
the isotropy groups of bifurcating solutions must coincide with isotropy groups of elements of ker ∇ 2 q Φ(q 0 , λ 0 ). Since G(q 0 ) = Γ(q 0 ) ⊂ H 0 is an isolated orbit of constant solutions, the bifurcation can not occur in the direction of H 0 . Finally, since ker ∇ 2 q Φ(q 0 , λ 0 ) ⊂ H 0 then G(q 0 ) × {λ 0 } is not an orbit of local bifurcation, a contradiction. To complete the proof it is enough to show that ker ∇ 2 q Φ(q 0 , λ 0 ) ∩ ∞ k=1 H k = ∅ if and only if λ 0 ∈ Λ. The study of ker ∇ 2 q Φ(q 0 , λ 0 ) is equivalent to the study of the linearized system (3.1.4) and further, it is equivalent to the equation ∇ q Ψ(q, λ) = 0. By the equality (3.1.6), the last equation has solutions in ∞ k=1 H k if and only if a matrix Q(k, λ) = is degenerate for some k ∈ N i.e. k = λβ for some β 2 ∈ σ(∇ 2 U (q 0 )) ∩ (0, ∞). The theorem below provides the sufficient condition for the existence of local bifurcation in the terms of equivariant Conley index. This is a direct consequence of continuation property and homotopy invariance of equivariant Conley index, see [18]. Equivariant Conley index on the orthogonal section. In order to prove Theorem 1.1 we will study the existence of local bifurcation of solutions of equation (3.1.2) from the trivial family T . Additionally, we will control the minimal period of bifurcating solutions. The existence of local bifurcation will be a consequence of the change of the infinite-dimensional equivariant Conley index. Fix β j 0 satisfying the assumptions of Theorem 1.1, choose ε > 0 sufficiently small and define Without loss of generality one can assume that [λ − , λ + ] ∩ Λ = {1/β j 0 }. Since bifurcation does not occur at the level λ ± , the orbit G(q 0 ) is isolated in (∇Φ(·, λ ± )) −1 (0). 2π is an isolated critical orbit of the G-invariant functional Φ(·, λ ± ), q 0 ∈ H is an isolated critical point of S 1 -invariant potential Ψ λ ± . Hence q 0 is an isolated invariant set in the sense of the S 1 -equivariant Conley index theory defined in [18] i.e. CI S 1 ({q 0 }, −∇Ψ λ ± ) is defined. Let us underline that since we know the form of Φ, Ψ λ ± satisfies the assumptions (F.1)-(F.6) given in Subsection 2.5. Note that Put P : H → N and Q = Id − P : H → R for the H-equivariant, orthogonal projections. Note that the S 1 -invariant potential Π λ ± : R → R of the linear vector field In the following lemma we reduce the computation of the S 1 -equivariant Conley indexes of nonlinear maps to the linear case. Since N ⊂ H S 1 is finite-dimensional, applying Remark 2.3.4 we obtain . i.e. the S 1 -equivariant Euler characteristic of CI S 1 ({0}, −∇ϕ λ ) is generated by the identity in the Euler ring U (S 1 ). There exists a simple relation between the Euler characteristic of the Conley index χ(CI(S, −η)) of an isolated η-invariant set S with an isolating neighborhood N and the Brouwer degree deg(ν, N ), where η is a local flow generated by the equationẋ = −ν(x). In fact χ(CI(S, −ν)) = deg(ν, N ), see [23,32] for details. Rabinowitz proved in [27] that the Brouwer index of an isolated critical point which is a minimum is equal to 1, see also [2]. This implies that the Brouwer index of an isolated maximum equals (−1) k , where k is the dimension of a space. Hence, As a consequence of the above equality we obtain 3.2) which completes the proof. Applications In order to show the strength of our main result i.e. 
the Symmetric Liapunov center theorem for minimal orbit (Theorem 1.1), in this section we apply it to the study of periodic solutions of the Lennard-Jones and Schwarzschild 2- and 3-body problems with Γ = SO(2).

Consider (R²)^N as an orthogonal representation of SO(2) with the SO(2)-action given by the diagonal rotation γ(q_1, …, q_N) = (γq_1, …, γq_N). Define an open SO(2)-invariant subset Ω = {q = (q_1, …, q_N) ∈ (R²)^N : q_i ≠ q_j for i ≠ j} and note that if q_0 ∈ Ω then the isotropy group SO(2)_{q_0} is trivial. Recall that σ(S) stands for the spectrum of a symmetric matrix S and denote by mult(α) the multiplicity of an eigenvalue α ∈ σ(S).

4.1. The Lennard-Jones problem. The Lennard-Jones potential is used to model the nature and stability of small clusters of interacting particles in crystal growth, random geometry of liquids, and in the theory of homogeneous nucleation, see for instance [36]. The potential also appears in molecular dynamics to simulate many-particle systems ranging from solids, liquids and gases to biomolecules. It is well known in chemistry and chemical physics that the stability of some molecular structures is closely related to the local minima of the corresponding potential. Also, in the analysis of the native structure of a protein, it is necessary to find the lowest-energy configuration of a molecular system. In general, when it is possible to find the global minimum of a potential energy surface, we can get a global optimization of the problem, saving money and laboratory time in this way. Unfortunately, this is a difficult task in general, and researchers on the subject develop global optimization methods on simpler systems; one of the most useful in this direction is the Lennard-Jones potential, which has been used in the analysis of clusters in nanomaterials in recent times, see for instance [36] and the references therein.

The Lennard-Jones pair potential is given in terms of two parameters, where ε represents the minimum value of the potential energy and σ is the minimum distance from the origin on the x-axis (where the potential is repulsive). That is, molecules attract each other when they are close enough, and the intensity of this attraction force decreases when the molecular distance increases. For simplicity we assume that ε = σ = 1. So we consider N particles with equal mass m moving in the 2-dimensional Euclidean space. The forces between two particles are given by the Lennard-Jones potential. Let q_i denote the position of the i-th particle in an inertial coordinate system and let q = (q_1, …, q_N) ∈ (R²)^N. Choosing the units of mass, length and time conveniently, one can define the Lennard-Jones potential U : Ω → R as the sum of the pairwise interactions. In this case it is easy to verify that the following equality holds:
(∇U)^{-1}(0) ∩ Ω = {(q_1, q_2) ∈ Ω : q_1 = −q_2 and |q_1 − q_2| = 1},
see Theorem 1 of [7]. In other words, (∇U)^{-1}(0) ∩ Ω = Γ(q_0), where q_0 = (0, 1/2, 0, −1/2), i.e. the set of critical points of the potential U consists of one orbit Γ(q_0). Moreover, it was proved in [7] that the orbit Γ(q_0) consists of minima of U and U(Γ(q_0)) = −1. Since the action of SO(2) on Ω is free, the isotropy group SO(2)_{q_0} is trivial. It is easy to check that σ(∇²U(q_0)) = {0, 144} with mult(0) = 3 and mult(144) = 1. We have just shown that all assumptions of Theorem 1.1 are fulfilled with j_0 = 1 and β_1 = 12. Applying this theorem we obtain the existence of non-stationary periodic solutions of system (1.1) in any neighborhood of the orbit SO(2)(q_0).
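The eigenvalue computation above can be checked numerically. The following is a small illustrative sketch (not taken from the paper) which assumes the normalized pair potential V(r) = r^{−12} − 2r^{−6}; this explicit form is an assumption on our part, chosen because it is consistent with the stated facts U(Γ(q_0)) = −1 at inter-particle distance 1 and σ(∇²U(q_0)) = {0, 144}. The script evaluates a finite-difference Hessian of the two-body potential at q_0 = (0, 1/2, 0, −1/2) and recovers β_1 = 12 and the minimal period 2π/β_1 = π/6.

```python
import numpy as np

def U(q):
    # Assumed normalized Lennard-Jones pair potential V(r) = r**-12 - 2*r**-6,
    # which attains its minimum value -1 at r = 1.
    q1, q2 = q[:2], q[2:]
    r = np.linalg.norm(q1 - q2)
    return r**-12 - 2 * r**-6

def hessian(f, x, h=1e-5):
    # Central-difference numerical Hessian of a scalar function f at x
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i], np.eye(n)[j]
            H[i, j] = (f(x + h*e_i + h*e_j) - f(x + h*e_i - h*e_j)
                       - f(x - h*e_i + h*e_j) + f(x - h*e_i - h*e_j)) / (4 * h * h)
    return H

q0 = np.array([0.0, 0.5, 0.0, -0.5])    # a point on the critical orbit SO(2)(q0)
print("U(q0) =", U(q0))                  # expected: -1
eig = np.sort(np.linalg.eigvalsh(hessian(U, q0)))
print("spectrum of the Hessian:", np.round(eig, 3))   # expected: 0, 0, 0, 144
beta = np.sqrt(eig[-1])
print("beta =", round(beta, 3), " minimal period 2*pi/beta =", round(2*np.pi/beta, 4))
```

Under this assumed pair potential, the three zero eigenvalues correspond to the two translations (U depends only on q_1 − q_2) and to the tangent direction of the SO(2)-orbit, which is exactly why the orbit of minima is degenerate in the sense discussed in this paper.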
Moreover, the minimal period of these solutions is close to π/6 ≈ 0.5236.

4.2. The Schwarzschild 3-body problem. The potential in which we are interested comes from relativistic physics. It was introduced in 1916 by Schwarzschild [30] in order to give a solution to Einstein's equations for the gravitational field of an uncharged, spherical, non-rotating mass, which through a classical formalism provides the Binet-type equations. Again, as in the Lennard-Jones problem, the corresponding force (after a normalization of coordinates using the cosmological background) is given by minus the gradient of the so-called Schwarzschild potential, which has a simple closed form. The Schwarzschild potential was brought into the framework of dynamical systems and celestial mechanics by Mioc and collaborators, see for instance [24], [33] and the references therein. This new and original approach to studying the dynamics of particles moving under this potential has been very useful in astrophysics for the analysis of theoretical black holes or the motion of a galaxy far enough away that one can consider it as a single object. It has also been used in cosmology for the analysis of clusters of galaxies. The case A < 0 < B, which is the one relevant to the main results of this paper, models the photogravitational field of the Sun, see [3] and the references therein for more details.

For 1 ≤ i < j ≤ 3 define the pairwise potentials U_ij and U ∈ C²((0, +∞)³, R) by U(r_12, r_13, r_23) = Σ_{1≤i<j≤3} U_ij(r_ij). The Schwarzschild 3-body potential U : Ω → R is defined by
U(q) = U(r_12(q), r_13(q), r_23(q)) = Σ_{1≤i<j≤3} U_ij(r_ij(q)),   (4.2.2)
where r_ij(q) = |q_i − q_j|. We observe that the Schwarzschild potential is smooth and SO(2)-invariant. As for the Lennard-Jones problem, our goal here is to use our main Theorem 1.1 to show the existence of new families of periodic solutions of system (1.1) with the Schwarzschild potential U defined by formula (4.2.2). In the following lemma we describe the local non-degenerate minima of the potentials U_ij, for 1 ≤ i < j ≤ 3; the minima occur iff
r_12(q_0) = −3B_12/A_12,  r_13(q_0) = −3B_13/A_13,  r_23(q_0) = −3B_23/A_23,
which completes the proof.
2018-03-12T08:30:46.000Z
2017-11-10T00:00:00.000
{ "year": 2018, "sha1": "707dffc64d9930e46512b0e9e73bdb0c34af1a0b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1711.03773", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "707dffc64d9930e46512b0e9e73bdb0c34af1a0b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
16163574
pes2o/s2orc
v3-fos-license
Prevalence and genetic diversity of rotavirus infection in children with acute gastroenteritis in a hospital setting, Nairobi Kenya in post vaccination era: a cross-sectional study Introduction Rotavirus is the leading cause of severe diarrhoea among infants and young children. Each year more than 611 000 children die from rotavirus gastroenteritis, and two million are hospitalized, worldwide. In Kenya, the impact of recent rotavirus vaccinations on morbidities has not been estimated. The study aimed at determining the prevalence and identity of rotavirus strains isolated from rotavirus-associated diarrhoea in vaccinated children presenting with acute gastroenteritis. Methods Two hundred and ninety eight specimen from children presented at Gertrude Childrens’ Hospital from January to June 2012 were tested by EIA (Enzyme-linked Immunosorbent Assay) for rotavirus antigens. Molecular characterization was conducted on rotavirus-positive specimens. Extracted viral RNA was separated by polyacrylamide gel electrophoresis (PAGE) and the specific rotavirus VP4 (P-types) and VP7 (G-types) determined. Results The prevalence rate of rotavirus was 31.5% (94/298). Of the rotavirus dsRNA, 57 (60.1%) gave visible RNA profiles, 38 (40.4%) assigned long electropherotypes while 19 (20.2%) were short electropherotypes. The strains among the vaccinated were G3P [4], G12P [6], G3P [6], G9P [4], G mixed G9/3P [4] and G1/3P [4]. Specifically, the G genotypes were G9/3 (5.3%), G9 (4.3%), G3 (4.3%), G12 (2.1%) and mixed G1/3 (1.1%). The P genotypes detected were P [4] (5.3%) and P [6] (5.3%). Conclusion The present study demonstrates diversity in circulating genotypes with emergence of genotypes G3, G9, G12 and mixed genotypes G9/3 and recommends that vaccines should be formulated with a broad range of strains to include G9 and G12. Introduction Diarrhea is a leading killer of children in Kenya, causing approximately 9 percent of deaths in children less than five years of age [1]. It is estimated that 27% of all under five diarrheal disease hospitalization in Kenya is caused by rotavirus infection [2]. Current global estimates of mortality attributed to rotavirus-associated disease is 611,000 in children less than 5 years old [3] and mortality figures for sub-Saharan Africa total 145,000 annually [4]. Studies in Nigeria [5], Tunisia [6] and Kenya [7] have demonstrated that close to 90% of all children are infected with rotavirus by 2 years of age. In Kenya, the peak age of contracting gastroenteritis in children is 6-24 months [8][9][10]. It is estimated that, in Kenya, 68 deaths, 132 hospitalizations, and 21,800 clinic visits per 100,000 children aged less than 5 years annually are attributable to rotavirus diarrhoea [11]. Rotavirus belongs to the family Reoviridae, non-enveloped with a triple-layered icosahedral protein capsid and a genome of 11 double stranded RNA segments [12]. Rotavirus is the most common etiological agent associated with severe gastroenteritis leading to dehydration and death in young infants' worldwide [3]. Rotaviruses are classified into groups, subgroups, serotypes and on the basis of electrophoretic migration of gene segments. The group and subgroup specificity are present on the inner capsid VP6 . Thus, currently only rotavirus groups A, B and C have been identified as human and animal pathogens, while groups D, E, F, and G have only been identified in animals and birds [13][14][15]. Majority of rotavirus diarhoea infections are caused by group A rotaviruses [16]. 
Group B rotavirus infections are uncommon but have recently been associated with outbreaks in China and India [17]. Group C rotavirus has been isolated in Kenyan children [18]. The genes encoding the outer capsid proteins VP7 and VP4 form the basis of classification of group A rotaviruses into G and P genotypes, respectively [19]. There are twenty P genotypes, with P[4], P[6], and P[8] most frequently associated with human infections. The pentavalent (RotaTeq®) vaccine viruses were generated by crossing the naturally attenuated bovine rotavirus strain WC3 with five unique human rotaviruses, each contributing a G1, G2, G3, or G4 VP7 gene or a P[8] VP4 gene to one of the vaccine viruses [21]. The Rotarix® vaccine is a human G1P[8] virus, RIX4414, which was derived by serial passage in cell culture of a virus recovered from the stool of an infected child [22]. Human rotavirus vaccine has been shown to reduce hospitalizations as a result of gastroenteritis from any cause by up to 42% [22]. The present study was aimed at estimating the prevalence of rotavirus gastroenteritis and the genetic diversity of rotavirus among vaccinated children at the Gertrude's Children's Hospital, Nairobi, Kenya.

Study setting and population

This study was conducted at the Gertrude's Children's Hospital (GCH) between January and July 2012. The GCH is a leading pediatric hospital in Kenya which offers a variety of healthcare services to infants, school-age children, and teenagers. All children aged less than 5 years presenting with gastroenteritis at the hospital's outpatient department, or during their first 48 hours of hospitalization, and whose parents/guardians consented to the study were included in this study. Children who developed gastroenteritis 48 hours after admission, those with bloody stool with gastroenteritis, and those whose parents/guardians did not give consent were excluded from the study.

Ethical clearance

Ethical clearance for this study was obtained from the Scientific and Ethical Review Committees of the Kenya Medical Research Institute (KEMRI). Authorization to conduct the study was obtained from the Gertrude's Children's Hospital (GCH).

Research instruments

A structured questionnaire was used to collect the patients' clinical and demographic data, both from the hospital records and from the consenting parents/guardians.

Study design and sample size determination

This was a hospital-based cross-sectional study. The required sample size was calculated using the Fischer et al. (1998) formula. The sample size was calculated based on a previous prevalence study of rotavirus gastroenteritis in an urban hospital in Nairobi, in which the prevalence had been established to be 28.4% [23]. To detect this with a precision of 5% and a confidence level of 95%, at least 313 patients were required. Only 298 had an adequate sample for analysis.

Molecular analysis of ELISA-positive samples

The ELISA-positive samples were analyzed by polyacrylamide gel electrophoresis (PAGE) to identify the presence of rotavirus double-stranded RNA. The procedure was as stipulated in the WHO Manual of Rotavirus detection and characterization methods [24]. The rotavirus dsRNA was run on 10% polyacrylamide resolving gels using a large-format gel electrophoresis system (Hoefer SE600), and a 3% spacer gel was used to enhance resolution of the dsRNA segments. Thirty microlitres (30 µl) of each sample was loaded onto the gels and electrophoresis was conducted at 100 V for 16–20 h at room temperature.
The gels were stained using silver staining to group the rotaviruses into electropherotypes, as described in the Laboratory Manual developed by the African Rotavirus Workshop in South Africa, 2002 [25]. Reverse transcriptase/polymerase chain reaction (RT-PCR) amplification was performed on the rotavirus dsRNA as described by the South and West African Regional Rotavirus Laboratories. The extracted dsRNA was re-suspended in 20 µl of sterile deionized water and stored for use in PCR reactions. Briefly, the dsRNA was denatured by boiling at 94 °C for 5 min, followed by chilling on ice. The dsRNA was then reverse transcribed by incubating with reverse transcriptase and deoxynucleotides for 30 min at 42 °C. The resultant cDNA was amplified in magnesium-dependent PCR. Briefly, for G typing, the full-length (1,062 bp) gene segment 9, encoding the VP7 glycoprotein, was reverse transcribed and amplified using primers sBeg9 [nucleotide (nt) 1–21, 5'-GGCTTTAAAAGAGAGAATTTC-3'] and End9 (nt 1062–1036, 5'-GGTCACATCATACAATTCTAATCTTAAG-3'), followed by genotyping with a cocktail of primers specific to six human serotypes G1–G4, G8 and G9 (aBT1, aCT2, aET3, aDT4, aAT8 and aFT9) and the consensus primer RVG9, as described by [26,27] and shown in Table 1. For VP4 genotyping, the full-length (876 bp) gene segment 4, encoding VP4, was reverse transcribed and amplified using the outer primers Con3 and Con2 as previously described [28] and shown in Table 2, followed by genotyping with a cocktail of primers specific to the five human P genotypes P[4], P[6], P[8], P[9], and P[10] (2T-1, 3T-1, 1T-1, 4T-1, 5T-1) and the consensus primer, as described by Gentsch. PCR fragments were analyzed on 2% TAE agarose gels at 80–90 volts with an appropriate molecular weight marker to determine the genotype of the rotavirus strains. PCR bands were compared with molecular weight markers.

Data management and analysis

Data coding and analysis were performed using the SPSS Version 20.0 software. Pearson's chi-square test was used to determine associations. The level of significance was fixed at 0.05 (p = 0.05).

Results

Three hundred and thirty-one (331) participants who met all the inclusion criteria were recruited. Of these, 298 stool specimens were examined in the final analysis. Ninety-four (94; 31.5%) specimens were positive for rotavirus.

Discussion

This onset of infection correlates well with the decline of maternally acquired antibodies, which disappear around 5 months [32]. In the present study the gender difference was not significant. This is in agreement with another study looking at risk factors in pediatric diarrhea [33]. Among the rotavirus-positive children, fifty-four (57.4%) who were not vaccinated had some dehydration, as compared to twenty-eight (30%) who were vaccinated. A similar percentage of vaccinated and unvaccinated children were treated by intravenous rehydration. In this study, among vaccinated children, the G9 genotype was the most common genotype, at 23.4%. This is in agreement with recent studies in Kenya where prevalences of 13–15% have been reported [9,34,35]. G9 has been recognized as the most widespread of the emerging genotypes. It was first reported in the United States in the early 1980s [36]. Soon after its detection, it disappeared for more than a decade; it then re-emerged in the mid-1990s and has been affecting patients to date. Currently, the genotype comprises 4.1% of global rotavirus infections, and accounts for as high as 70% of rotavirus infections as reported by some studies [37].
Genotype G3 was the second most common genotype demonstrated in this study among the vaccinated children, at 4.2%. In Kenya, genotype G3 strains were the predominant circulating genotypes in the years 1999 and 2000 [29], but further reports on the occurrence of this genotype declined. Studies done elsewhere around the world have reported a high occurrence of the G3 genotype. Studies in Korea and Ghana found G3 to be the second most predominant G genotype, at 26.4% and 12.7%, respectively [38,39]. In Tunisia, a study in 2011 found that G3 was the second most predominant, at 25% [40]. Apart from being the predominant single G genotype, the Tunisian study established that it was also found in mixed infections with G9 and G1 at 17.5%. Studies in India detected G3 in 3% and 1.9% of diarrheal cases [41,42].

The genotype G12 was also demonstrated in this study, in both the vaccinated and unvaccinated children in equal percentages (4.3%). A study by Kiulia from 2009 to 2011 in Eastern Kenya [34] showed that G12 was found in 3.1% of the samples. Notably, G12 genotypes have been consistently detected in various countries, including South Africa [43,44]. Since its first identification in the Philippines in 1990 [45], it has been reported worldwide [41]. In Malawi, G12 was the predominant circulating strain [46,47]. G12 was found in association with the P[6] genotype in young African children with symptomatic rotavirus infection. In the current study, all four G12 strains were found in association with P[6], forming the G12P[6] combination. This finding concurs with other early studies that have reported the occurrence of the combinations G12P[6] and G12P[4], isolated for the first time in Bangladesh [48].

Genotype G1, both singly and combined with G3, was also detected in this study. Genotype G2 was not identified and has not been documented for over 8 years in Kenya, as reported by Kiulia et al. [34]. The current study did not document any case of the G1 genotype among vaccinated children, although numerous molecular epidemiological studies have indicated that G1 is the most common circulating G type around the world [37,49,50]. Interestingly, none of the specimens processed demonstrated genotype P[8]. According to these results, genotypes G1 and P[8] seem to be well controlled, given that both are included in both available vaccines.

Conclusion

In the current study, the prevalence of rotavirus infection remains high in spite of vaccination. Notably, vaccine uptake is still very low. Emerging strains of G12 and G9 were found among these children infected with rotavirus.

Competing interests

The authors declare no competing interest.

Table 1: Oligonucleotide primers for G serotyping, as designed by Gouvea et al., 1990 and Gault et al., 1999
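For reference, the Fischer sample-size calculation cited in the Methods can be reproduced in a few lines. This is only an illustrative sketch, not part of the original study, and it assumes the standard normal-approximation formula n = z²·p(1−p)/d² with p = 0.284, d = 0.05 and a 95% confidence level:

```python
import math

p = 0.284   # expected prevalence from the earlier Nairobi study
d = 0.05    # absolute precision
z = 1.96    # z-value for a 95% confidence level

n = (z ** 2) * p * (1 - p) / d ** 2
print(math.ceil(n))   # 313, matching the "at least 313 patients" stated in the Methods
```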
2018-04-03T04:43:07.679Z
2017-01-24T00:00:00.000
{ "year": 2017, "sha1": "92e6049caef5e6f9a8ed70042cc3dd26ad1c6838", "oa_license": "CCBY", "oa_url": "https://doi.org/10.11604/pamj.2017.26.38.10312", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "92e6049caef5e6f9a8ed70042cc3dd26ad1c6838", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233215765
pes2o/s2orc
v3-fos-license
Systematic Understanding of Pathophysiological Mechanisms of Oxidative Stress-Related Conditions—Diabetes Mellitus, Cardiovascular Diseases, and Ischemia–Reperfusion Injury Reactive oxygen species (ROS) plays a role in intracellular signal transduction under physiological conditions while also playing an essential role in diseases such as hypertension, ischemic heart disease, and diabetes, as well as in the process of aging. The influence of ROS has some influence on the frequent occurrence of cardiovascular diseases (CVD) in diabetic patients. In this review, we considered the pathophysiological relationship between diabetes and CVD from the perspective of ROS. In addition, considering organ damage due to ROS elevation during ischemia–reperfusion, we discussed heart and lung injuries. Furthermore, we have focused on the transient receptor potential (TRP) channels and L-type calcium channels as molecular targets for ROS in ROS-induced tissue damages and have discussed about the pathophysiological mechanism of the injury. INTRODUCTION At first glance, diabetes, which causes abnormal blood glucose control, and ischemia-reperfusion injury (IRI) of the heart, which causes myocardial infarction, seem to have nothing in common. However, both these diseases are consistent in that they cause inflammation with the release of cytokines and the responses of immune cells. These reactions are triggered by the oxidative stress (OS) that occurs in the body. Oxidative stress is defined as an imbalance between oxidants and antioxidants in favor of the oxidants (1). Reactive oxygen species (ROS) including hydrogen peroxide (H 2 O 2 ) and superoxide ( . O − 2 ) that are generated in the cells cause OS when they become excessive. Oxidative stress causes diseases such as diabetes (2), IRI (3), cancer (4), and Alzheimer's disease (5), and, notably, this condition is affected by diet and obesity (6). While the organ heart has drawn much attention in the context of ischemic heart diseases, which is the leading cause of death among humans (7), IRI also occurs in several other organs such as the lung (8). In addition, transplantation of organs, such as lungs and kidneys, can result in IRI due to blood reperfusion in ischemic-isolated organs (9). While having their own specific mechanisms for the development of diseases, the pathological conditions of diabetes and IRI also share a common molecular basis in a series of intracellular signal transduction mechanisms originating from OS, as discussed in the present review. In addition to diabetes, extending the pathophysiology of IRI from the perspective of OS is meaningful to understand the diseases and development of preventive measures and treatments involved. PATHOPHYSIOLOGICAL RELATIONSHIP BETWEEN DIABETES AND CARDIOVASCULAR DISEASES FROM THE PERSPECTIVE OF ROS As the life-expectancy of diabetic patients has increased significantly, the cardiovascular complications of diabetes have become prominent. When compared with people without diabetes, people with type 2 diabetes (T2DM) are at an increased risk of cardiovascular diseases (CVD) (10). The increased production of ROS in the diabetic heart is an important factor in the occurrence and development of diabetic cardiomyopathy (11). Reactive oxygen species can induce the inactivation of the signaling mechanism between the insulin receptor and the glucose transport system, which can lead to insulin resistance (12). Meanwhile, diabetes is a producer of OS, which can lead to atherosclerosis (13,14). 
We have explored the mechanisms by which T2DM triggers OS and increases the risk of CVD from the prospect of obesity, hyperglycemia, and intracellular calcium. Obesity Plays an Important Role in Heart Disease of Diabetic Patients A recent study reported presence of differences in the factors causing OS in the hearts of obese and non-obese diabetic mice. In addition, the decreased expression of antioxidant molecules in the hearts of non-obese diabetic mice was reported to act as an important factor that leads to the development of heart diseases (15). In this study, Li et al. created two groups of T2DM mouse models: obese and non-obese groups. They found that obese T2DM mice demonstrated more severe heart remodeling and earlier contractile dysfunction than non-obese T2DM mice. In addition, obese T2DM mice revealed severe and persistent myocardial lipotoxicity, which was manifested by increased free fatty acids (FFA) uptake. Excessive FFA uptake activates the peroxisome proliferator-activated receptor alpha (PPARα) pathway and phosphorylate glycogen synthase kinase 3 beta (GSK-3β), while inhibiting glucose transporter 4 (GLUT4) and fatty triglyceride lipase (ATGL). Among the tissue damage caused by lipotoxicity, OS is the main factor (16). Under the effect of lipotoxicity, the tissues absorb a large amount of FFA, leading to excessive oxidation of FFA, a sharp increase in the amount of oxygen consumption, and excessive ROS production (17-20). In addition, excessive FFA and resultant oxidation lead to ceramide synthesis, which in turn leads to increased cardiomyocyte apoptosis through the mitochondrial pathway (20). Another interesting mechanism by which obesity affects the development of atherosclerosis through OS is Na/K-ATPase. According to Krithika Srikanthan et al., activation of the Na/K-ATPase signal cascade exacerbates obesity, diabetes, dyslipidemia, and atherosclerosis, and these conditions are all related to the imbalance of OS (21). Na/K-ATPase is a scaffold and signaling protein, and is also involved in many clinical conditions, including CVD and chronic kidney disease (22, 23). Fat accumulation in humans and mice is related to systemic OS (24). The white adipose tissue of obese mice has a trend of increased expression of NADPH oxidase (NOX) and decreased expression of antioxidant enzymes (25, 26). In cultured adipocytes, the production of ROS was significantly increased during the differentiation of 3T3-L1 cells into adipocytes, indicating that the production of ROS increased simultaneously with the accumulation of fat in adipocytes (27). Besides, the increase in free fatty acid levels can induce ROS production through the activation of NOX (28). Furthermore, diet-induced OS can activate the Na/K-ATPase/Src/ROS amplification loop, leading to the occurrence and development of dyslipidemia and atherosclerosis (21). The nuclear factor erythroid 2-related factor 2 (NRF2) pathway is closely related to antioxidant effects and is activated at the onset of OS (29). Li et al. reported that the expression level of NRF2 and its target genes heme oxygenase 1 (HO-1) and NAD(P)H quinone dehydrogenase 1 (NQO1) increased significantly in the heart of obese T2DM mice, but they decreased in the hearts of non-obese T2DM mice (15). This result implies that myocardial lipotoxicity and antioxidant pathway activation occur in obese T2DM patients. This finding may provide a new guidance for the prevention and clinical treatment of diabetic heart diseases. 
Relationship Between Increased ROS Caused by Hyperglycemia and Cardiovascular Dysfunction

Hyperglycemia (high levels of blood glucose) leads to increased production of ROS, which ultimately leads to vascular dysfunction (30). Meanwhile, OS from hyperglycemia leads to insufficient glucose uptake by muscle and fat cells. Furthermore, OS from hyperglycemia may promote β-cell dysfunction and reduce insulin secretion by β cells (13, 31). This also leads to further aggravation of hyperglycemia. As a result, hyperglycemia and OS interact, and it is therefore important to understand how to reduce OS so as to reduce hyperglycemia. Another question that needs resolution is how high blood glucose levels trigger OS and lead to cardiovascular dysfunction.

Under a hyperglycemic condition, ROS accumulates, damages DNA and proteins, and injures cardiomyocytes. The increase in ROS production caused by hyperglycemia occurs in the following ways: activation of the protein kinase C (PKC) pathway via diacylglycerol (DAG), increased hexosamine pathway flux, increased production of advanced glycation end-products, and increased flux in the polyol pathway (32, 33). Regarding ROS production in the polyol pathway, excess glucose enters the pathway when aldose reductase reduces glucose to sorbitol (Figure 1) (34). This reaction oxidizes NADPH to NADP+, consuming NADPH (34). As NADPH is essential for antioxidant regeneration, the decrease in the amount of NADPH leads to the facilitation of OS.

FIGURE 1 | Development of atherosclerosis via ROS production in the polyol pathway in the condition of hyperglycemia. In the process of the reduction of glucose to sorbitol by aldose reductase, NADPH is oxidized to NADP+, consuming NADPH. As NADPH is essential for regeneration of antioxidant glutathione (GSH), the reaction of reducing H2O2 to H2O is suppressed. The accumulation of H2O2 causes inflammation, resulting in the development of atherosclerosis. GSSG, glutathione disulfide.

Simultaneously, the accumulation of ROS caused by hyperglycemia triggers insulin resistance (13, 35, 36). Insulin resistance occurs when the cells in the muscles, fat, and liver do not respond appropriately to insulin and cannot take up glucose from the blood to derive energy (37). In response, the pancreas produces more insulin (37). Interestingly, insulin resistance is a component of T2DM, high blood pressure, and dyslipidemia; these characteristics together constitute a major risk factor for CVD (38). Past studies have reported that mitochondrial OS is related to insulin resistance (39). Under high blood glucose conditions, the mitochondria are therefore active and produce more ROS (40). Elevated ROS levels can induce mitochondrial division, which in turn affects the insulin–PI3K–AKT pathway and GLUT4 (12). Glucose transporter 4 is the main glucose transporter (41) in the skeletal muscles and adipose tissue. The cells respond to insulin by increasing the expression of GLUT4 in the plasma membrane, thereby increasing the cellular uptake of blood glucose. When the glucose level is high, the body produces insulin, which then activates the PI3K/AKT pathway (42). Mitochondrial fission is directly related to insulin resistance of the skeletal muscles (43). Past studies have also demonstrated that restricting mitochondrial overactivation can prevent insulin resistance (44).
In addition, insulin resistance caused by mitochondrial dysfunction may lead to metabolic and cardiovascular abnormalities, thereby increasing the incidence of CVD (38, 45). In summary, OS caused by hyperglycemia plays an important role in cardiovascular dysfunction and both the conditions interact with and influence each other. Effect of OS on Calcium Handling in the Heart Under Diabetic Conditions Redox regulation of calcium-handling proteins directly affects cardiac contraction by changing intracellular calcium concentration (46). As discussed earlier, hyperglycemia in the cells can lead to excessive ROS production. The increase in the ROS level can inhibit autonomic ganglion synaptic transmission by oxidizing the α3 subunit of nicotinic acetylcholine receptor, which may in turn result in fatal arrhythmia (47). At the same time, ROS leads to sudden death of a diabetic patient after myocardial infarction by increasing post-translational protein modification, which leads to the downregulation of Ca 2+ -ATPase transcription in the sarcoplasmic reticulum. Ventricular contraction and relaxation are mainly controlled by the release and uptake of Ca 2+ by the sarcoplasmic reticulum Ca 2+ -ATPase 2 (SERCA2) pump (48, 49). In hypertrophic and failing myocardium, the level of SERCA2 protein and its ability to absorb Ca 2+ are inhibited. Reactive oxygen species can oxidize and directly enhance CaMKII activity, which in turn phosphorylates and activates several Ca 2+ -handling proteins such as the cardiac ryanodine receptor RyR2 or cardiac SERCA (50). Protein O-linked-N-acetylglucosaminylation (O-GlcNAcylation) plays important roles in calcium handling under diabetic conditions (Figure 2). For example, hyperglycemia increases the O-GlcNAc modification of calcium/calmodulindependent protein kinase IIδ (CaMKIIδ), which in turn leads to the autonomous activation of CaMKII (51, 52). Furthermore, the hyperglycemia-induced O-GlcNAcylation of CaMKII causes ROS production by NOX2 (53). Autonomous activation of CaMKII can lead to decreased cardiac contractility and potential fatal arrhythmias, such as ventricular premature beats and delayed depolarization. In fact, delayed depolarization is related to long QT interval arrhythmia (54). On the other hand, in the chronic hyperglycemia condition in diabetes, O-GlcNAc transferase reduces the transcription of SERCA2, which results in decreased calcium reuptake and impaired relaxation (55). The overexpression of GlcNAcase or the inhibition of GlcNAc modification increases the expression of SERCA2a, the ablated sarcoplasmic reticulum Ca 2+ leakage, improved cardiac contractility, and reduced arrhythmia events (56). In summary, calcium plays an important role in cardiac dysfunction caused by ROS derived under the condition of hyperglycemia. IRI IN TERMS OF OXIDATIVE DAMAGE ischemia-reperfusion injury is a type of tissue damage that occurs when the blood flows back to the tissue after a period of ischemia or under the lack of oxygen. IRI is often detected in cases of organ transplants, major organ resections, and shock. The main organs in which IRI occurs are the heart, lung, brain, liver, kidney, and intestine (57-62). This finding contributes to morbidity and mortality occurring in a variety of pathologies, such as myocardial infarction and stroke caused by coronary atherosclerosis (63). 
ischemia-reperfusion is often associated with microvascular injury, especially due to increased permeability of the capillaries and arterioles, which lead to increased interstitial diffusion and fluid filtration across the tissues. After ischemia, the re-entry of blood into the tissue induces the release of large amounts of oxygen free radicals. These free radicals trigger enzymatic reactions, leading to oxidative damage to the cell membranes as well as the production of toxic metabolites and cell injury involving DNA, proteins, and lipids (63, 64). Interestingly, the common factor between diabetes, as discussed in the previous section, and IRI is that OS affects the deterioration of the pathological processes, including inflammation. During IRI, the damaged tissues produce excessive amounts of ROS, causing the release of proinflammatory cytokines and apoptosis (64-66). After myocardial ischemia, cardiac surgery, cardiogenic shock, or circulatory arrest, myocardial IRI can lead to adverse cardiac events. Although it is necessary to restore the blood flow to nourish the cells, reperfusion is known for its harmful effects because of OS and the subsequent development of intense inflammation and immune responses (67)(68)(69)(70)(71)(72)(73)(74)(75). The following subsections discuss the role of the three molecules involved in the development of IRI. TLR4 Innate immune response to invading pathogens, which is derived from the toll receptors, is shared extensively among insects and vertebrates (76). Toll-like receptor 4 (TLR4) binds to various types of ligands such as lipopolysaccharides (LPS), low-density lipoproteins, and heat-shock proteins (77,78). Among the tolllike receptors (TLRs) consisting of 11 subtypes in humans, TLR2 and TLR4, predominantly TLR4, are involved in the development of IRI (79). The TLR4-signaling pathway is an important inflammatory cascade in IRI with essential functions in the adaptive immune system (80,81). Toll-like receptor 4 responds to endogenous molecules during the sterile inflammatory processes such as IRI (82) and is considered as the key regulator in several ischemia-reperfusion models. As discussed earlier, OS is critically involved in the pathogenesis of IRI. In fact, ROS facilitates TLR4 trafficking to the plasma membrane, thereby promoting the TLR4 activity (83,84). This event implies that the pathogenesis of IRI is at least partly attributable to the effect of ROS on the TLR4 activation. Furthermore, Pahwa et al. postulate that ROS act as a potential activator of TLRs and that hyperglycemia-induced OS activates TLRs, subsequently inducing inflammatory responses in diabetes (85). The activations of TLR2, TLR3, and TLR4 increases oxidation levels of lipids and proteins (86). In addition to the TLR4 activation by ROS mentioned earlier, the relationship between ROS and TLR4 includes ROS production through the TLR4 activation. For example, TLR4 activation induced by LPS facilitate intracellular ROS production via NOX-4 (87). In TLR4deficient mice, the ROS generation is reduced (88). The TLR4/NF-κB pathway is involved in the development of myocardial IRI. TLR4, initially detected in monocytes, is also expressed in other tissues, including the heart (76). Moreover, TLR4 is strongly expressed in injured myocardium (96). MAPKs, such as p38 and c-Jun NH2-terminal kinase (JNK), are activated during myocardial IRI (97), which in turn induces an acute inflammatory reaction. According to Lee et al., ROS produced by NOX-2/4 causes MAPK activation (98). 
TLR4-deficient mice have significantly less myocardial injury, as characterized by the reduction in the myocardial infarction area, decrease in the JNK and NF-κB activation, as well as reduction in the mRNA expression of inflammatory cytokines, such as IL-1β, IL-6, and MCP-1 (99). The TLR4/NF-κB pathway is also involved in the development of IRI in other organs. The deletion of TLR4 or pharmacological antagonists reduces the severity of IRI in cardiac, hepatic, renal, and pulmonary models (99)(100)(101)(102)(103)(104)(105)(106)(107)(108). In case of the lung IRI, the levels of phosphorylated JNK and NF-κB are diminished in TLR4-deficient mice (106,108). Two pathways that possibly get activated during the lung IRI are apoptosis, induced by the activation of a transcriptional program controlled by NF-κB and acute inflammation promoted by the activation of resident alveolar macrophages and the expression of several proinflammatory cytokines and chemokines, such as TNF-α, IL-1β, IL-8, and macrophage inflammatory protein 2 (MIP-2) (109). The markers of lung injury, including permeability index, myeloperoxidase content, and bronchoalveolar lavage inflammatory cell counts were all decreased with TLR4 knockdown. The TLR4 knockdown in alveolar macrophages resulted in almost complete weakening of the lung IRI. The protective effect of TLR4 knockdown appears to be partly mediated by the significant reduction in pre-transcriptional signaling through MAPKs phosphorylation and possibly due to the nuclear translocation of transcription factors, such as NF-κB and activator protein-1 (107,110). DPP4/CD26 Dipeptidyl peptidase-4 (DPP4), also known as CD26, is a cellsurface protease offers a wide range of biological functions. As a serine-type protease, DPP4 cleaves dipeptides from the Nterminus, with proline residues in the penultimate position (111,112). Clinical and experimental study over the past 30 years has clearly demonstrated that the DPP4/CD26 pathway is involved in a variety of physiological processes and immune system diseases (113). In addition, DPP4/CD26 transmembrane glycoproteins are expressed not only by various cells of the immune system but also by the epithelial and systemic vascular endothelial cells, by the endothelial cells of venules and capillaries, by the cells of the heart, kidney, lung, pancreas, spleen, and small intestine, by the vascular smooth muscle cells, and by monocytes and hepatocytes; moreover, it is soluble in the plasma (111,114,115). In addition to its involvement in the development of diabetes, accumulating evidence indicates the role of DPP4 in IRI (122). The lung is the second-highest expressed organ of DDP4 in rats (134). Dipeptidyl peptidase-4 can directly affect the dynamics of lung inflammation and may itself act as a proinflammatory signaling molecule (135,136). In the lung, the capillaries may act as the main source of DPP4 activity, while the submucosal serous gland and alveolar cells also express DPP4 (111). Similar to the case of myocardial IRI, GLP-1 is believed to exert a protective effect also in the lung IRI by suppressing the production of OS (137). HO-1 The presence of excessive free heme facilitates ROS formation, thereby leading to abnormal endothelial cell function, as observed in systemic hypertension, diabetes, and IRI (19384082). HO is important to reduce the production of ROS (138). Specifically, HO possesses the ability to degrade heme and produce carbon monoxide (CO), a heme ligand, and biliverdin, an antioxidant (139). 
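To make the reaction just outlined concrete, heme oxygenase opens the heme ring in an O2-dependent reaction that draws reducing equivalents from NADPH (via NADPH-cytochrome P450 reductase). The overall conversion, written from standard heme biochemistry without balancing the reducing equivalents, is:

\[
\text{Heme} + 3\,\text{O}_{2} \;\xrightarrow[\text{NADPH}]{\text{HO}}\; \text{Biliverdin} + \text{CO} + \text{Fe}^{2+}
\]

Biliverdin is subsequently reduced to bilirubin by biliverdin reductase, and both pigments have antioxidant activity, which is one route by which HO induction lowers the ROS burden generated by free heme.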
Human HO exists in three isoforms, HO-1, HO-2, and HO-3. Among these, HO-1 is involved in exerting protective effect against IRI. The expression of HO-1 is modulated by the transcription factor NRF2, as discussed in Section Obesity plays an important role in heart disease of diabetic patients. NRF2, which translocated to the nucleus under OS, activates antioxidant response element and increases the transcription of antioxidant genes, including HO-1 (140). The HO-1 system includes four main functions: (1) antioxidant function; (2) maintenance of microcirculation; (3) regulation of cell cycle; and (4) antiinflammatory function (141). Overexpression of HO-1 exerts a potent cellular protective effect in rat heart ischemia-reperfusion models. HO-1 can reduce IRI due to the enhanced antioxidant and anti-apoptotic activities (142,143). Moreover, HO-1 possesses antiapoptotic outcomes. These effects get mediated through the p38 MAPK-signaling transduction pathway activated by CO (144). In addition, CO-exposed animals, at least partially, demonstrate a significant reduction in hyperoxia-induced lung apoptosis through the anti-inflammatory MKK3/P38 MAPK pathway (144). Three major MAPKs in cardiomyocytes are affected by the ischemiareperfusion, and the ERK pathway may be critical for cell survival by protecting the cells from programmed cell death caused by stress-induced activation of p38 and JNK (145). EFFECTS OF ROS ON THE ION CHANNELS AND THEIR IMPLICATION WITH PATHOPHYSIOLOGY The transient receptor potential (TRP) melastatin (TRPM) subfamily belongs to the TRP cation channel superfamily, and most of its members either have calcium ion permeability or are calcium ion activating proteins (146,147). Changes in the concentration of Ca 2+ /Mg 2+ in cells or changes in the cell membrane potential and electrical activity can affect various biological processes, including the cellular OS level (148), endothelial cell permeability (149), and cell death (150). Therefore, in the past 10 years, the members of this family have attracted more and more interest and attention to CVD (151,152), T2DM (153), and inflammation (154). The activity of some members of the TRPM subfamily is regulated by OS (155). Therefore, the emergence of OS-regulated ion channels in an oxidative environment creates favorable conditions for disease development. With the increase of OS, the TRPM4 channel functions abnormally, which promotes the onset and development of the disease. To verify this point, it became necessary to create an ischemic and hypoxic cellular environment. Presently, cobalt chloride (CoCl 2 ) (164) and H 2 O 2 (165,166), in a laboratory setting, are widely used to establish OS models and fully characterized chemical agents. CoCl 2 can be used to establish a simple in vitro model of hypoxic/ischemic disease in the laboratory, but up to now, there are few studies on TRPM4 channel induced by CoCl 2 . The possible reason is that CoCl 2 can induce the production of ROSs, but also affect the expression of some genes, such as HIF-1α, p53, p21, and PCNA (167)(168)(169). CoCl 2 may also affect the remodeling of CMs in hypoxic/ischemic area by activating PI3K/Akt and MAPK pathways (170), and CoCl 2 -induced apoptosis may be related to mitochondria-mediated apoptosis pathway (171). Hydrogen peroxide increases the activity of TRPM4 (172), while ATP and ADP inhibit its activity (173). 
When ATP production in hypoxia is insufficient, cardiomyocytes activates the K ATP channels (174) and cause cell hyperpolarization, thereby preventing arrhythmia. However, this process may be affected by electrical disturbances induced by TRPM4 protein, because the channel is sensitive to Ca 2+ and ATP (175,176). Meanwhile, our previous research results (166) demonstrated that TRPM4 is involved in the death of cardiomyocytes mediated by H 2 O 2 . At higher concentrations, H 2 O 2 increases cell death in a concentration-dependent manner, while 9-phenanthrol (9-Phe) can partially reverse H 2 O 2 -induced cell death. The reversal effect is probably the result of 9-Phe's direct effect on the TRPM4 channel (166,177,178). TRPM2 Unlike TRPM4, TRPM2 is a cation channel permeable to Ca 2+ (179). TRPM2 also plays an important role in cell proliferation and survival (180). It is widely distributed and sensitive to OS (181). However, at present, there is little information available on the physiological and pathophysiological functions of TRPM2 in the heart. Early studies of the TRPM2 channel function support the observation that TRPM2 activation induces cell death by continuously increasing the [Ca 2+ ] i (182)(183)(184). Mitochondrial integrity is critical to the survival and function of cardiomyocytes and is essential for maintaining the highenergy requirements of cardiomyocytes. Ca 2+ overload can lead to mitochondrial permeability transition (MPT), but Ca 2+ overload is the result of bioenergy failure after MPT occurs following myocardial ischemia-reperfusion (185). This result can be corroborated from the study of Davidson et al. (186). In Langendorff-perfused mouse hearts, MitoQ, a mitochondrialtargeted scavenger of ROS, could significantly reduce the Ca 2+ wave-related mPTP opening. The mitochondria can thus benefit from the calcium influx mediated by TRPM2 to reduce the mitochondrial ROS production (179). The heart consumes an equivalent of 6 kg of ATP per day, most of which is produced through mitochondrial oxidative phosphorylation (187). Myocardial ischemia consumes a large amount of ATP and produces a large amount of ROS; this process reduces mitochondrial biogenesis and mitochondrial dysfunction, ultimately leading to cell death (39, 188). However, the results of a study showed (189) that TRPM2 can rescue the ATP levels in the cells. During OS, TRPM2 maintains cell survival after OS by regulating the antioxidant pathway and cofactors that are regulated by NRF2. Moreover, the TRPM2 channels can protect cardiomyocytes from IRI (181), which may be due to the Ca 2+ flux mediated by TRPM2 that enhances the activity of calcineurin and the stability of hypoxia-inducible factor (HIF) (190). In immune cells, the NOX activity depends on membrane depolarization (191) when the TRPM2 channel is activated and it inhibits the production of ROS. TRPM2-mediated calcium influx can reduce the production of ROS through the depolarization of the plasma membrane of immune cells and the negative feedback regulation of ROS production (192). This event contributes to cell functions such as cytokine production, insulin release, cell motility, and cell death (193). L-Type Voltage-Gated Calcium Channel Pulmonary circulation is characterized by low resistance and low pressure, and the mean pulmonary arterial pressure (mPAP) is <20 mmHg (194). Hypoxic pulmonary vasoconstriction (HPV) is a physiological response of the arterioles. 
However, there is usually no obvious effect on the pulmonary arterial pressure during HPV on limiting the hypoxia area (195). Persistent hypoxia induces pulmonary vasoconstriction and vascular remodeling mediated by the contraction and proliferation of pulmonary artery smooth muscle cells (PASMC), which eventually led to pulmonary hypertension (PH) (196). Pulmonary hypertension associated with hypoxia belongs to the third group in the classification of PH (194). Although there is no unified view yet on this association, hypoxia could increase the level of ROS in PASMC (197)(198)(199)(200)(201)(202)(203)(204)(205). Excessive ROS is considered to be the main factor of arterial remodeling in PH induced by chronic hypoxia (CH) (206,207). The specific mechanism of ROS promoting PH has not been clarified yet, but it is evident that ROS plays an important role in CH-induced PH vasoconstriction. Abnormal voltagedependent Ca 2+ influx is considered to be related to the pathogenesis of hypoxic PH (HPH) (208). In PASMC, cytosolic Ca 2+ concentration ([Ca 2+ ] cyt ) is regulated by two pathways: voltage-dependent Ca 2+ influx and voltage-independent Ca 2+ influx. The influx of Ca 2+ through L-type voltage-gated calcium channels (VGCC) is an important [Ca 2+ ] cyt regulatory pathway in HPH. Nifedipine and verapamil, which are L-type VGCC antagonists, can prevent HPV, inhibit PASMC proliferation, and alleviate HPH (208)(209)(210)(211). L-type VGCC belongs to one of the calcium ion channels, which is a polymer transmembrane protein complex composed of five subunits of α1, α2, δ, β, and γ. Here α1 is the main functional subunit, while the others are auxiliary subunits. There are four subtypes of α1: α1S (Ca v 1.1), α1C (Ca v 1.2), α1D (Ca v 1.3), and α1F (Ca v 1.4) (212). Ca v 1.2 was upregulated, while L-type VGCC could functionally enhance pulmonary vasoconstriction associated with Ca 2+ influx in PASMCs after CH exposure (213). The existing pharmacological data indicates that L-type VGCC plays an important role in the increase of [Ca 2+ ] i in PASMC induced by acute O 2 tension (214)(215)(216)(217)(218). Experiments are hence necessary to investigate the effects of specific inhibitors (such as mibefradil) of T-type VGCC to determine their role in maintaining [Ca 2+ ] i during hypoxia, although mounting evidence have demonstrated that the application of H 2 O 2 (219-221) and oxidized glutathione (GSSG) (222,223) resulted in Ca 2+ influx through L-type VGCC. In addition, the possibility of channel opening and inward Ca 2+ currents are increased by Ca v 1.2 subunit of L-type VGCC, which was glutathionylated by H 2 O 2 and GSSG in subsequent studies (222,223). Moreover, Ca 2+ signaling contributed to the contraction of PA (224). Furthermore, L-type VGCC has been reported to be sensitive to plasma membrane depolarization (225). Interestingly, vasoconstrictor endothelin-1 (ET-1) can stimulate L-type VGCC-mediated increase of Ca 2+ in PASMCs of CH Wistar rats through the PKC and Rho kinase-dependent ways (226,227). This situation is not difficult to understand, because both PKC (228) and Rho kinase (229) can be activated by oxidation to regulate this process. An indirect evidence of this finding is that ET-1 could increase the production of ROS in PASMCs (230)(231)(232). This hypothesis has not been tested in pulmonary circulation, but the activation of L-type VGCC induced by ET-1 in isolated cardiomyocytes is now known to be mediated by.O − 2 (233). 
CONCLUSION Oxidative stress reflects an imbalance between oxidant and antioxidant activities arising from numerous molecules and pathways. In this review, we discussed ROS production under hyperglycemic, diabetic conditions and the contribution of obesity to it. OS also affects calcium handling via SERCA2 and CaMKII, thereby impairing cardiac function in diabetes. In this way, OS mediates part of the effect of diabetes on CVD. Moreover, diabetes and IRI share common pathological mechanisms: OS-induced inflammation proceeds largely through the TLR4/NF-κB and TLR4/MAPK pathways in both conditions, and the DPP4/GLP-1 and NRF2/HO-1 systems are involved in ROS scavenging in both diabetes and IRI. We also discussed the effect of OS on the activity of ion channels such as TRPM2, TRPM4, and L-type VGCC, and their implications for diseases, including IRI. A deeper understanding of these mechanisms is expected to promote the development of new strategies for the prevention and treatment of these formidable diseases.
Formulation of the Polysaccharide FucoPol into Novel Emulsified Creams with Improved Physicochemical Properties Driven by the customers’ growing awareness of environmental issues, the production of topical formulations based on sustainable ingredients is receiving widespread attention from researchers and the industry. Although numerous sustainable ingredients (natural, organic, or green chemistry-derived compounds) have been investigated, there is a lack of comparative studies between conventional ingredients and sustainable alternatives. In this study, olive oil (30 wt.%) and α-tocopherol (2.5 wt.%) containing oil-in-water (O/W) emulsions stabilized with the bacterial fucose-rich polysaccharide FucoPol were formulated envisaging their validation as cosmetic creams. After formula composition design by Response Surface Methodology (RSM), the optimized FucoPol-based emulsion was prepared with 1.5 wt.% FucoPol, 1.5 wt.% cetyl alcohol, and 3.0 wt.% glycerin. The resulting emulsions had an apparent viscosity of 8.72 Pa.s (measured at a shear rate 2.3 s−1) and droplet size and zeta potential values of 6.12 µm and −97.9 mV, respectively, which are within the values reported for cosmetic emulsified formulations. The optimized formulation displayed the desired criterium of a thin emulsion system, possessing the physicochemical properties and the stability comparable to those of commercially available products used in cosmeceutical applications. Introduction The global market demand for products based on innovative ingredients and technologies has compelled the cosmetic industry to rapidly increase the research and development of natural, organic, and eco-friendly formulations [1][2][3]. One of the most notorious examples of this growing interest is the incremental utilization of natural polysaccharides in cosmetic formulations. These biopolymers are composed of carbohydrates with several hydroxyl groups that, given their chemical composition, strongly interact with water [1,2]. There are many functional polysaccharides, able to act as film formers, gelling agents, thickeners, suspending agents, conditioners, and emulsifiers. These features derive from the biopolymers' physical and chemical properties and are critical for polysaccharide-based cosmetics formulation technologies [1]. Examples of natural polysaccharides with consolidated utilization in commercial skin-care products include xanthan gum and cellulose, which are used as thickeners and stabilizing agents, and hyaluronic acid, which is applied as a moisturizing and bioactive ingredient [1,2,[4][5][6]. Besides polysaccharides, many proteins have also been demonstrated as good emulsifiers for example in food products [7][8][9]. The emulsifying ability of proteins derives from their amphiphilic character conferred by the presence of hydrophobic and hydrophilic amino acids in their structures, which allow proteins' adsorption at oil/water interfaces, thus stabilizing the emulsions [10]. However, proteins have low surface activity than most conventional emulsifiers and the final products' O/W Emulsions' Optimization With the objective of defining the composition resulting in emulsions with high EI after 24 h (E24), concomitant with high apparent viscosity, different FucoPol, cetyl alcohol, and glycerin concentrations were tested. 
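Two quantities anchor the analysis that follows: the emulsification index after 24 h (E24) and the apparent viscosity (η), modeled as functions of the three formulation factors. For reference, E24 is commonly computed from the height of the emulsified layer relative to the total height of the liquid column (the exact procedure used in this work is given in the Methods, and the height symbols below are only illustrative), and the response surface analysis fits the standard second-order polynomial model for three factors, whose coefficients are described in the Methods:

\[
E_{24}\,(\%) = \frac{h_{\text{emulsion layer}}}{h_{\text{total}}} \times 100
\]
\[
Y = \beta_0 + \sum_{i=1}^{3} \beta_i x_i + \sum_{i=1}^{3} \beta_{ii} x_i^{2} + \sum_{i<j} \beta_{ij} x_i x_j
\]

where Y is the predicted response (E24 or η), the x_i are the levels of FucoPol, cetyl alcohol, and glycerin, β0 is the model constant, and βi, βii, and βij are the linear, quadratic, and cross-product coefficients estimated from the design.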
High E24 values (≥95%) were obtained in most runs, except in runs 7, 8, 9, and 11, which were devoid of the FucoPol, irrespective of the cetyl alcohol and glycerin content that varied from 0.0-1.5 wt.%, and 1.0-3.0 wt.%, respectively. Higher FucoPol concentrations also conferred higher apparent viscosity to the emulsions, regardless of cetyl alcohol and glycerin concentrations. The emulsions presented a yellowish-white color, olive odor, and creamy/smooth texture, showing physical stability (as shown by the centrifugation test) for apparent viscosity values ≥90 Pa.s. Table 1 shows that the maximum apparent viscosity obtained values were 249 Pa.s (Run 4) and 244 Pa.s (Run 6), both containing a FucoPol concentration of 1.5 wt.%. ANOVA was used to define the working ranges for each variable resulting in the highest E24 and η values. The coefficients of multiple determination (R 2 ) values of E24 and η were 0.974 and 0.995, respectively. For η, the R 2 was in reasonable agreement with the adjusted R 2 (0.989) and the predicted R 2 (0.967). The adjusted coefficient of determination indicated that 98.9% of the variability in the response could be explained by the model. The quadratic model was significant (f -value = 169.92 and p-value < 0.0001), being supported by an insignificant lack-of-fit (p = 0.778) toward the response (η), meaning that the error predicted by the model was above the error of the replicas [24]. There is only a 0.01% chance for a noise-derived "Model F-Value", which implies an adequate variation of the data around its mathematical mean; in addition, the estimated factor effects are real [14,25,26]. The statistical analysis indicates that the proposed model was adequate to predict the ingredients' concentrations to obtain emulsions with higher viscosities. The same did not happen for E24, where the R 2 , adjusted R 2 , and predicted R 2 were 0.974, 0.941, and 0.718, correspondingly. The difference between the predicted R 2 and the adjusted R 2 was higher than 0.2, which may indicate a large block effect or a possible problem with the model and/or data. The RSM results ( Figure S1) suggest that cetyl alcohol and glycerin did not influence the E24 and η values. Moreover, FucoPol at 1.5 wt.% led to emulsions with η values above 206 Pa.s. Increasing the concentration of FucoPol resulted in more viscous emulsions, and more stability against coalescence, avoiding emulsions' phase separation [14,27]. This is due to FucoPol's ability to avoid droplets creaming and promote an increased viscosity of the formulation, as reported before [14]. Based on these results, the ingredient concentrations that promoted higher η and E24 values were defined as: 1.5 wt.% FucoPol, 1.5 wt.% cetyl alcohol and 3.0 wt.% glycerin. Physicochemical Characterization The freshly prepared formulations ( Figure 1a) presented a yellowish-white color (except formulation A which was completely white) and had a slight olive oil odor. Macroscopic observation, throughout the 60-day storage period ( Figure S2), showed the formulations maintained their homogeneous texture, with no visible oil/water phase separation, as confirmed by their EI that was kept unchanged (100%) (Figure 1b,c). The formulations' physical stability (Figures 1d,e and S3) was evaluated by the centrifugation test to check for the presence of phase separation [26], sedimentation, and/or precipitation [28]. 
Formulations A, E, and F remained stable for 60 days, showing no phase separation, while formulations B, C, and D showed phase separation at 30 days of storage. As presented in Figure 2a, formulations B and C were slightly acidic with pH values in the range of 6.3-6.9 throughout the storage period (60 days), whilst formulations A, D, E, and F had pH values above 7. Skin care products must not affect the acid-base balance of the skin's individual layers nor disrupt the stratum corneum barrier function [29]. Given the skin's surface pH (5.5), an acceptable formulation should have a pH value ranging from 4.0 to 7.0 [26,30,31], to avoid skin irritation [32]. Interestingly, the pH value of formulation C (1.5 wt.% FucoPol, 1.5 wt.% cetyl alcohol, and 3.0 wt.% glycerin) remained within the optimal range, going from 6.59 ± 0.01 to 6.30 ± 0.01 during the whole 60-day study period, supporting its suitability for use as a topical cream. The conductivity value, which is indicative of the number of free ions and water present in the system [26], is used to detect physical modifications [33] and to assess if the formed emulsion is an O/W or a W/O system [31,34]. As observed in Figure 2a, formulation A showed a significant increase in the conductivity value (from 102 ± 0.6 to 283 ± 2.0 µS/cm) after 7 days of storage, while for formulations B and C the changes were less significant (from 106 ± 0.3 to 122 ± 0.2 µS/cm, and from 109 ± 0.9 to 107 ± 0.7 µS/cm). Conductivity stability over the 60-day storage period (Figure 2a) indicated an absence of physical changes for formulations C, D, and F. Formulations A, B, and C presented higher conductivity values (>100 µS/cm), corresponding to an O/W system, indicating that the aqueous phase is the continuous phase of the system, whereas the oil phase is nonconductive [34]. Formulations D, E, and F (<50 µS/cm) are considered W/O systems. This result corroborates the emulsion determination test (Figure 3a) and the microscopic observation (Figure 3b): formulations A, B, and C droplets dispersed on the filter paper, confirming their O/W nature [14,35,36], and showed compartmentalized structures characteristic of O/W systems, consisting of dispersed oil droplets in the aqueous phase [14,37]. Thus, these results confirm that FucoPol forms O/W emulsions, in contrast to Sepigel® 305 and stearic acid under the same conditions. In addition to acting as an emulsifying agent, FucoPol appeared to have a pH-lowering effect.
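As a minimal illustration (not taken from the paper) of the conductivity-based classification rule applied above, the snippet below encodes the stated thresholds (>100 µS/cm for an O/W system, <50 µS/cm for a W/O system). The `emulsion_type` helper and the values used for formulations D-F are hypothetical placeholders, since only an upper bound (<50 µS/cm) is reported for those samples; the day-1 values for A-C are the ones quoted above.

```python
# Sketch of the conductivity-based O/W vs. W/O classification described above.

def emulsion_type(conductivity_uS_cm: float) -> str:
    """Classify an emulsion from its conductivity (µS/cm)."""
    if conductivity_uS_cm > 100:
        return "O/W"  # conductive aqueous continuous phase
    if conductivity_uS_cm < 50:
        return "W/O"  # non-conductive oil continuous phase
    return "indeterminate"

# Day-1 conductivities: A-C as reported above; D-F are illustrative values
# consistent with the reported "<50 µS/cm".
conductivity = {"A": 102.0, "B": 106.0, "C": 109.0, "D": 10.0, "E": 10.0, "F": 10.0}

for name, k in sorted(conductivity.items()):
    print(f"Formulation {name}: {k:6.1f} µS/cm -> {emulsion_type(k)}")
```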
Consumers prefer O/W emulsions due to their sensorial properties (easy to spread, non-greasy) [14,38,39], representing nearly 65% of the total emulsified products available in the cosmetic industry [16]. The formulations' physical stability was also assessed by measuring the droplet size during the storage period at room temperature (~20 °C). The distribution profile of oil droplets and their size influences the emulsion's stability, with smaller droplet sizes and lower PI values (<0.3) being responsible for higher stability [3,26,40-42]. As shown in Figure 2b, all formulations presented a droplet size characteristic of macroemulsions (>0.1-50 µm), experiencing a considerable increase in droplet size after 30 days of storage. This effect was less evident for formulation D (3.17-9.63 µm), which contained a higher concentration of stearic acid (5.0 wt.%) compared to formulation F (1.5 wt.% stearic acid), which suggests that higher emulsifier concentration allows a decrease of the droplet size and, consequently, increased stability during storage [43]. At lower emulsifier concentrations, the droplet covering ability of the emulsion decreases, causing the coalescence of neighbor droplets that results in the formation of larger droplets [44]. Furthermore, non-ionic emulsifiers can reduce the droplet size of olive oil (triglycerides)-in-water emulsions [45]. For FucoPol-containing formulations, the addition of cetyl alcohol and glycerin (formulation C, Figure 2b) allowed for a decrease in the droplet size (8.68-40.0 µm to 6.12-24.2 µm) and a slight increase of the stability during storage, when compared to formulation A. In general, the droplet size of an emulsion is determined by the homogenization technique applied, the environmental conditions, and the ingredients used for its preparation [46].
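As background for the DLS-based droplet sizes quoted above (the instrument settings are given in the Methods), cumulants analysis yields a z-average hydrodynamic diameter from the measured translational diffusion coefficient via the Stokes-Einstein relation, and the polydispersity index follows from the second cumulant. These are standard DLS relations rather than equations stated in the paper; η_s denotes the dispersant viscosity, to avoid confusion with the formulation viscosity η used elsewhere:

\[
d_H = \frac{k_B T}{3\pi\,\eta_s\,D}, \qquad \mathrm{PI} = \frac{\mu_2}{\bar{\Gamma}^{2}}
\]

where d_H is the hydrodynamic diameter, k_B the Boltzmann constant, T the absolute temperature, D the translational diffusion coefficient obtained from the intensity autocorrelation function, and μ2 and Γ̄ the second cumulant and mean decay rate of the cumulants fit.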
Furthermore, there are some technical issues to obtain small droplet-size emulsions using polysaccharide-type emulsifiers [6]. The ideal monodisperse system should have a PI value lower than 0.3 [34,41], which was not verified in any of the formulations (0.47 ≤ PI ≤ 5.02 for t = 60 days) indicating considerable polydisperse droplet sizes. As shown in Figure 2b, the Zeta-potential of formulations A, B, C, E, and F was −193 mV, −98.4 mV, −97.9 mV, −160 mV, and −86.7 mV, respectively: with evident stability for formulation C during the storage period. The formulation is considered stable when the Zeta-potential value is more than +25 mV or lower than −25 mV [26]. However, some W/O emulsions are highly stable despite having low Zeta-potential values [47], such as formulation D, which showed a rapid aggregation regardless of its absolute Zeta-potential value (0.0 mV) [48]. Rheological Assessment All formulations exhibited a similar shear thinning behavior to the torque response, as the viscosity gradually decreased under increasing shear rates ( Figure 4). The viscosity decrease under a shear rate is attributed, in shear thinning emulsions, to their semi-flexible molecular structure [49]. Except for Formulation E (Figure 4e), all formulations exhibited a slight decrease in viscosity during the storage time. As mentioned before, formulations containing stearic acid became hard during storage, corroborated by the increase in viscosity values over the storage time. As shown in Table 2, all formulations showed solid-like behavior, with the storage module higher than the loss module (G′ > G″ at 0.1 Hz). This behavior was more pronounced in formulations D, E, and F, meaning that these formulations present a strong network [15,49,50] with higher stability. Formulations A, B, and C showed a weak gel rheological pattern with an increasing difference between G′ and G′′ values as the frequency increases from 0.01 to 10 Hz. This behavior indicated a dominance of the elastic components over the viscous components of the system, and that physical bonds between the macromolecules held the system's structure [51]. Figure 5 illustrates the structural stability for the first day and after 60 days of storage at room temperature. The formulations containing a synthetic emulsifier showed much higher values of G′ and G″ than FucoPol. The G′ and G″ modules of most formulations decreased with storage time, except for formulations E and F, suggesting more structured systems, which can influence the spreading behavior [3]. For formulations A and B, it was visible a crossover at 0.01 Hz at t = 1 day, while for t = 60 days the crossover occurs at higher frequencies (0.3 Hz for formulation A, and 0.03 Hz for formulation B). For formulation C, G′ gradually became bigger than G″ during the whole frequency range investigated (0.01-10 Hz) (see supplementary material, Figure S4). Similar behavior has been reported for bacterial cellulose emulsions [50]. Cosmetic preparation stability over storage time is related to its tendency to exhibit changes in particle migration [29]. In fact, for the FucoPol-based formulations (Figure 4a-c, Table 2), compared to formulation A (8.7 Pa.s), there was an increase of the viscosity to 19.5 Pa.s in formulation B with the addition of 1.5 wt.% cetyl alcohol ( Figure 4b); in formulation C (Figure 4c) the addition of both glycerin (3.0 wt.%) and cetyl alcohol (1.5 wt.%) further increased the viscosity to 34.3 Pa.s. 
This demonstrates that, contrary to the result obtained in Section 3.2, glycerin and cetyl alcohol led to increased apparent viscosity. This may be due to the homogenization method applied (mechanical homogenization vs. manual homogenization) or to the upscale, from 5 g to 100 g, which possibly changes the behavior and efficiency of the ingredients [3]. Comparing formulations C (34.3 Pa.s) and F (6.2 Pa.s) (Table 2), it is possible to conclude that, for the same emulsifier concentration (1.5 wt.%), FucoPol conferred significantly higher apparent viscosity than stearic acid. Textural Assessment The textural parameter values (firmness, consistency, cohesiveness, and adhesiveness) of the prepared emulsified formulations are summarized in Table 3. In general, at the end of the storage time (60 days), a decrease in the firmness, consistency, and adhesiveness of the formulations was observed. However, there are some relevant considerations for the FucoPol-based formulations: the addition of glycerin and cetyl alcohol increased not only their apparent viscosity but also their firmness and cohesiveness. In fact, the addition of cetyl alcohol increased the firmness from 0.064 N (formulation A) to 0.162 N (formulation B), while further adding glycerin (formulation C) resulted in increased firmness (0.194 N). These results are concordant with the η values (Table 2), where formulation A exhibited lower apparent viscosity (8.72 Pa.s) than formulation C (34.3 Pa.s). Spreadability is an important texture parameter that reflects the product's contact with the skin (i.e., how it feels to the touch) and its ease of removal from packaging, which may affect utilization compliance [52,53]. This parameter is crucial in cosmetic emulsion development, being a decisive factor for consumers' approval of products [14,54]. Formulation A at t = 1 day showed lower firmness (0.064 N) and consistency (0.261 mJ) values, indicating a more spreadable cream sample [14]. On the other hand, formulations C and E showed lower spreadability than the others.
Consistency, a textural parameter directly influenced by viscosity, determines the cosmetic formulation application on the skin (higher consistency means a higher difficulty of application and vice-versa) [29]. In terms of adhesiveness, formulations B (0.467 mJ), C (0.387 mJ), and E (0.499 mJ) seemed to be more adhesive than formulations A (0.244 mJ), D (0.338 mJ), and F (0.317 mJ). For FucoPol-based formulations, glycerin and cetyl alcohol positively impacted the physical characteristics. These results are consistent with the rheology assays. Comparison of FucoPol-Based Formulation with Commercial Cosmetic Creams Formulation C (after 60 days of storage) was compared to several cosmetic products available in the market in terms of pH, conductivity, droplet size, physical stability (by centrifugation test), and rheological and textural parameters. In the centrifugation test to assess the physical stability, both Formulation C and Sephora ® hand cream showed phase separation. As shown in Table 4, Formulation C presented a pH value similar to Uriage ® Xémose (face cream) (6.68) but lower than the other tested commercial products, such as Shiseido ® primer (8.17) and Sephora ® hand cream (8.18). These values are higher than the optimal pH range (between 4.0 and 7.0) compatible with human skin. Nonetheless, the droplet sizes of Shiseido ® primer (22.0 µm) and Sephora ® hand cream (27.9 µm) are very similar to that of Formulation C (24.2 µm). In terms of rheological parameters, Formulation C and Uriage ® Xémose presented higher apparent viscosity values, 23.7 Pa.s and 25.9 Pa.s, respectively, and showed a similar viscoelastic profile to Shiseido ® primer. Uriage ® Xémose and Formulation C displayed very similar textural parameters, which suggests that Formulation C has adequate sensory characteristics for a face cream. Other polysaccharides presented similar behavior, as demonstrated by Miastkowska et al. [55], that developed a nanoemulsion gel containing 1.0 wt.% hyaluronic acid displaying a lower apparent viscosity (22.43 Pa.s at 1.0 s −1 ) when compared to tested market preparations (e.g., 55.58 Pa.s at 1.0 s −1 ) but higher spreadability. On the other hand, Danila et al. [56] found that higher concentrations of xanthan gum in the formulation (0.2-1.0 wt.%) resulted in higher apparent viscosity values [56]. In general, Formulation C seems to have suitable physical characteristics to be used in cosmetic products, being, in some cases, equal or superior to the tested commercial products. Table 4. Rheological parameters and textural parameters of commercial products tested. Apparent viscosity (η, measured at 2.30 s −1 ) and viscoelastic parameters (G , G ) measured at room temperature (~20 • C). G -storage/elastic modulus and G -loss/viscous modulus, at f = 0.1 Hz. Materials Olea europaea (olive) fruit oil was purchased from a local market. Olive oil is an antiaging ingredient indicated for dermatology applications due to its acidity, antioxidant activity, and soothing effect [9,24], preventing, for example, the appearance of stretch marks [25]. α-tocopherol (vitamin E) was acquired from Sigma-Aldrich (Munich, Germany). α-tocopherol is widely used as a cosmetic antioxidant ingredient, presenting an active role in anti-aging mechanisms, and acting as a coadjutant in atopic dermatitis and melanoma treatments [57,58]. Olive oil and α-tocopherol at concentrations of 20-30 wt.% and 1.0-5.0 wt.%, respectively, were used previously to prepare FucoPol-based emulsions [14]. 
FucoPol was produced by the bioreactor (Sartorius, Göttingen, Germany) cultivation (10 L) of Enterobacter A47 (DSM 23139) on glycerol-supplemented medium, as previously described [18], and extracted from the cultivation broth by ultrafiltration with a 30 kDa membrane, according to the method previously described [17]. FucoPol was composed of 40 mol% fucose, 29 mol% glucose, 24 mol% galactose, and 7.0 mol% glucuronic acid, with a total acyl group content of 11.6 wt.%. The sample had protein and inorganic salt contents of 8.2 wt.% and 4.0 wt.%, respectively. Other ingredients that were selected for the emulsions' formulation were cetyl alcohol, glycerin, triethanolamine (TEA), and methyl paraben. Cetyl alcohol is a long-chain alcohol [12] commonly used in cosmetics at concentrations of 0.1-5.0 wt.% [59], with no toxic effects, as a co-emulsifier [60], surfactant [61], thickener [62], and opacifying agent [63]. As a co-emulsifier, a cetyl alcohol concentration higher than 2.0% should be avoided to prevent a soaping effect [64]. Glycerin, responsible for the improvement of skin's smoothness and moisture [65], is used as a humectant in cosmetics at variable concentrations: 10% in face/neck products; 5.0% in body/hand products; 3.3% in moisturizing products [66]. TEA is used in cosmetics as a pH adjuster [67] and used in personal care products at concentrations between 0.0002% and 19% [64,[67][68][69][70]. Methyl paraben, a safe preservative ingredient found in most cosmetics products [38], can be used singly or in combination to enhance the antimicrobial effect, at concentrations below 0.3% [71,72], being normally a non-irritating and non-sensitizing ingredient [71]. Methyl paraben and cetyl alcohol were acquired from Sigma-Aldrich (Munich, Germany). TEA was acquired from Acros Organics B.V.B.A. (Geel, Belgium), and glycerin was acquired from Honeywell (Seelze, Germany). Factorial Design of Experiments Response surface methodology (RSM) [78] was applied to determine the best conditions for the development of cosmetic formulations stabilized with FucoPol. A three-factor central composite design (CCD) analyzed the effect of independent variables (Table 5) The mathematical relationship between the independent variables can be approximated by the second-order polynomial model equation: where Y is the predicted response; x i are the independent variables (n = 3). The parameter β 0 is the model constant; β i are the linear coefficients; β ii are the quadratic coefficients and β ij are the cross-product coefficients [14]. A full factorial design of experiments was obtained using the Design-Expert (Design-Expert ® software package from Stat-Ease Inc., Minneapolis, MN, USA). The validated model was plotted in a three-dimensional graph, generating a surface response that corresponds to the best emulsification index and apparent viscosity. Analysis of variance (ANOVA) was used to determine the regression coefficients of individual linear, quadratic, and interaction terms. Preparation of Fucopol-Based Emulsion Formulations Six formulations were prepared according to Table 6, including three formulations based on FucoPol as the main emulsifier (formulations A, B, and C) and three formulations based on stearic acid and/or Sepigel ® 305 as emulsifier agents (formulations D, E, and F). Formulation A was prepared with FucoPol as the sole emulsifier, while formulation B additionally contained 1.5 wt.% cetyl alcohol as the co-emulsifier. 
Formulation C was similar to formulation B but 3.0 wt.% glycerin was added to it as an emollient (Table 6). Three other formulations were developed using synthetic emulsifying agents and compared with the FucoPol-based formulations. Formulations D and F were similar to formulation C, but FucoPol was replaced by stearic acid as the main emulsifier at two concentrations, namely, 5.0 and 1.5 wt.%, respectively. Formulation E was similar to formulation F but the co-emulsifier cetyl alcohol was replaced by Sepigel ® 305 (1.5 wt.%). For preparing the emulsions, the oil phase (32.5 g) and the aqueous phase (67.5 g) were heated at 75 • C in a recirculated heated water bath Thermomix ® ME (B.Braun, Melsungen, Germany). The emulsification was performed by slowly adding the oil phase to the aqueous phase and mixing with a shear rate of about 11,000 rpm (IKA T25 easy clean digital ULTRA TURRAX, Staufen, Germany), for 3 min, followed by manual continuous stirring until room temperature was attained [3]. All formulations were prepared in batches of 100 g. Physicochemical Properties The organoleptic (color, odor, appearance) and macroscopic appearance of each formulation were visually analyzed. The EI was evaluated during the storage period (t = 1, 3, 7, 30, 60 days) as described in Section 2.3. The pH and conductivity were determined by dispersing the formulation sample in deionized water (10%, w/w) [34,79,80]. The emulsion type was determined as described by Baptista et al. [14], by placing a droplet of the test emulsion onto Whatman™ filter paper (0.2 µm, GE Healthcare Life Sciences, Munich, Germany) and observing the droplet's dispersion. For the microscopic observation, 10 µL of the sample was stained with 1% (v/v) Nile Blue A (Sigma-Aldrich, Darmstadt, Germany) and observed in a Zeiss Imager D2 epifluorescence microscope (Carl Zeiss, Oberkochen, Germany), with a magnification of 100×, through ZEN lite software (Carl Zeiss, Oberkochen, Germany). The physical stability was evaluated by centrifuging 1 g of the sample, at 4800 rpm, for 30 min [81]. Dynamic Light Scattering (DLS) was performed to determine the average particle size, the polydispersity index (PI), and the Zeta Potential, using a nanoPartica SZ-100V2 series (Horiba, Lier, Belgium) with a laser of 532 nm and controlling temperature with a Peltier system (25 • C). DLS measurements were performed by diluting the samples (1:10, w/w) in a disposable cell with a scattering angle equal to 90 • . Cumulants statistics data analysis was performed to determine the hydrodynamic size and polydispersity. Zeta Potential measurements were performed in a graphite electrode cell with a 173 • scattering angle [20]. Viscoelastic Properties The formulations' rheological properties were studied using an MCR 92 modular compact rheometer (Anton Paar, Graz, Austria), equipped with a CP35-2 cone-plate sensor system (angle 2 • , diameter 35 mm) and a P-PTD 200/AIR Peltier plate to keep the measurement temperature constant at 25 • C. Dynamic viscosity measurements were performed at shear rates between 0.01 and 1000 s −1 . Frequency sweep analysis was performed at frequencies ranging from 0.01 to 10 Hz, for a constant strain of 0.1-1.0% that was well within the linear viscoelastic limit evaluated through preliminary amplitude sweep tests [14]. 
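For the oscillatory measurements just described, the storage and loss moduli reported in the Results are the components of the material's response to a small sinusoidal strain. As a reminder of the standard definitions (not specific to this work), for an imposed strain γ(t) = γ0 sin(ωt) the stress response is

\[
\sigma(t) = \gamma_0\left[G'(\omega)\sin(\omega t) + G''(\omega)\cos(\omega t)\right], \qquad \tan\delta = \frac{G''}{G'}
\]

so that G′ quantifies the elastic (in-phase) contribution and G″ the viscous (out-of-phase) contribution; G′ > G″ (tan δ < 1) over the probed frequency range corresponds to the predominantly elastic, gel-like behavior discussed for the formulations above.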
Texture Analysis The firmness, consistency, cohesiveness, and adhesivity of the attained formulations were determined using a texture analyzer (TMS-Pro, Food Technology Corporation, Sterling, VA, USA) equipped with a 10 N load cell (Mecmesin, Sterling, VA, USA). The sample was placed in a female conic holder and compressed at 11 mm of depth (which represented a sample deformation of around 70%); this procedure was done twice by a male conic probe at a speed of 2 mm/s [14]. Conclusions This study demonstrates FucoPol's suitability for the development of emulsified formulations with good physical and chemical properties for their utilization as cosmetic creams. The fucose-rich biopolymer has shown to possess great potential to replace stearic acid as an emulsifier, resulting in emulsions with similar/better stability, viscosity, firmness, spreadability, and droplet size, which were also shown to be comparable to commercial creams. Although further tests must be done to fine-tune the formulations, the results obtained substantiate the relevance of FucoPol in the development of topical formulations.
Semantic Broadening of the Word Sudah in the Spoken Use of Bahasa Indonesia in Sumba This is a descriptive qualitative study which aims at studying the use of the word sudah in the spoken use of Bahasa Indonesia in Sumba. According to Kamus Besar Bahasa Indonesia, sudah is an adverb used mostly to inform that something has already happened or that something has already been done. The position of sudah in phrases or sentences precedes the verb or adjective that it modifies. However, there is a different use of sudah in Sumba, and probably in mostly eastern islands in Indonesia, where this adverb is placed after the verb. The purpose of this research is to study the meaning brought by this new arrangement of sudah. The data was spoken use of Bahasa Indonesia collected through note-taking at campus, houses, and public places (market and stores). The data collected was then analyzed using agih method or meaning-analysis method. There were also 2 informants asked to get information on the function of the new arrangement. The analysis showed that the adverb sudah underwent the semantic or meaning broadening in its use. This study revealed there were 3 new meanings as the result of broadening process, (1) to give order or to ask other people to do something, (2) to invite other people to do something, and (3) to inform that something is about to happen or to be done soon. With these new meanings, the adverb sudah can also function as an adverb to show that something has not happened yet or something has not been done yet, the contrast of what is suggested in the dictionary. I. INTRODUCTION Language is always changing. Since the ancient time up to the present day, every language in the world has been evolved and developed into different form that we are using today (Al-Kadi & Ahmed, 2018;Baugh & Cable, 2005). This change can happen across time and space, across continent and social groups. The change also varies. It can be faster or slower, it can be in a great amount or it can be just a small part. Studies on language have shown that language change covers all aspects of language such as pronunciation change, morphology change, semantic/meaning change, and the invention of new words. Young generation in this millennium may find it difficult to understand what was written by their ancestors hundred years ago. The same thing must happen to the older generation that they cannot easily comprehend what the young generation is saying nowadays. For example, with the advance in technology industry, new words and terminologies are found related to computer and internet. For young generation, it is easy to grasp the meaning because they follow the trend. However, for older people born in 1900s, it will be very difficult since these new technologies were not the trend during their time in the past. As the technology replaces old things, it also somehow diminishes the use of old terms. "The most noticeable differences between generations are in vocabulary. What one generation called hi-fi, car phone, and studious young man or woman younger generation calls stereo, cell phone or mobile phone, and (in some instances) nerd (Finegan, 2004). Changes in language are not only about the coining of new words and disappearing of old words, but also about any other aspects of language. According to Hock and Joseph (2009), there are 5 types of language change: sound change, analogy, semantic change, syntactic change, and change resulting from language contact. 
Further, Hock and Joseph (2009) explained that sound change refers to the changes in pronunciation, analogy refers to the change in pronunciation of a word affected by other words, semantic change is related to the change of the word meaning, syntactic change is the change in the way words are arranged in a sentence, and the last type refers to the changes that occur when languages are in contact with each other resulting phenomenon such as borrowing, adoption, or adaptation. In terms of number of words owned by a language, English language now has around 470,000 words (Merriam-Webster Dictionary, 2016). This number is increasing compared to the number of words the language had in the 90s. The same fact also happens to Bahasa Indonesia which had 20,000 entry on its first dictionary. The language now has 127,036 words based on Kamus Besar Bahasa Indonesia (Kamus Besar Bahasa Indonesia, 2016). There are also changes in pronunciation both in English and Bahasa Indonesia. For the English language, there are Old English (OE) which was used from the year 449 to 1066 AD, Middle English used between 1100-1500, and Modern English pronunciation which has been used since 1500 up to present day (Fromkin et al., 2000). For Bahasa Indonesia, there are at least 4 official stages, (i) Ejaan Van Ophuijsen used in 1901-1947, (ii) Ejaan Republik used in 1947-1972, (iii) Ejaan yang Disempurnakan used in 1972-2015 Ejaan Bahasa Indonesia used since 2015 until now (Pedoman Umum Ejaan Bahasa Indonesia, 2016). Semantic change also plays an important part in the development of every language. Semantic change or semantic shift refers to the change of the meaning of words from time to time. "There is a general tendency for words to develop new meanings and to relinquish other meanings over time" (Aarts et al., 1993). Bloomfield (1933) and Campbell (1998) defined semantic change as a change in the concepts that were associated with a term and the innovations that change the meaning of words (in Maxilom, 2008). Murphy & Koskela (2010) stated a lexical item may develop additional or different senses from the existing ones it has before. In other words, meaning of existing senses of a lexical item may shift and give way for new senses as the old senses become superseded. In other case, the new meaning may develop along with and coexist with the new meaning (in Danzaki, 2015). Finegan (2004) stated that beside changes in sounds of words, the meaning of terms can also change. About 1000 years ago, the English word starve (Old English steorfan) meant simply 'die' (by any cause). However, the meaning has changed and in today's English. We use the term to refer to the deprivation and death caused by hunger. Another example is the noun meat which once referred to all kind of food, and the word flesh which had a wider meaning than at present, referred to both living flesh and dead flesh as food. Both words have underwent semantic change now. The word gay referred to 'lighthearted' or 'happy' in the past and in modern days, the word is used the most to refer to homosexuals or lesbians. There are basically several types of semantic changes which can be studied from 2 perspectives (Grondelaers et al., 2012). First is the onomasiological which focuses on a referent, an object or an idea, and analyzes the synchronically and diachronically varying ways of designating that referent. The emphasis is from the function to form. 
The second is the semasiological perspective, which focuses on a linguistic expression and investigates the synchronic and diachronic variation of the objects and ideas designated by that expression; in this case the emphasis is from form to function. In the latter perspective, which is the focus of this study, semantic change occurs when a particular form gains or loses a meaning. In this sense, "meaning changes, while form remains relatively constant" (Traugott, 2017). Traugott (2017) classified such change into several types: (i) metaphorization, the conceptualization of one thing in terms of another; (ii) metonymization, the association of a thing usually in terms of contiguity; (iii) pejoration, the association of a term with a negative meaning; (iv) amelioration, the association of a term with a positive meaning; (v) narrowing, the restriction of meaning; and (vi) generalization, the extension of meaning, also called broadening or widening. Banks (2004) stated that the meanings of words have been in constant flux. A word such as lady, which was once a title in an aristocratic society in OE and ME, now has a somewhat pejorative connotation in PDE. Over the centuries, few word meanings have stayed the same. Some changes have occurred regionally and in different dialects; others have occurred because the next generation has placed different connotative or, in some cases, denotative meanings on words. Changes occur for any number of reasons; they always have and, presumably, they always will. Language is not a fixed entity, and this has been evident since OE. Generalization, or the broadening/widening of the meaning of words, refers to the phenomenon in which the meaning of a word becomes broader than what it meant originally. Hollmann used the word dog as an example of broadening: in the past, dog referred to certain specific large and strong breeds and not to just any dog (Hollmann, 2009). In modern usage, however, the word has undergone what is known as generalization, widening, or broadening: it now refers to a highly variable domestic mammal (Canis familiaris) (Merriam-Webster Dictionary, 2016), that is, dogs of any kind. Semantic changes also occur in languages other than English. A study of semantic change in selected Cebuano words from written texts and the spoken language of Cebuano speakers living in Cebu province in the Philippines revealed that metaphor was the dominant type of semantic change in the written texts, while broadening was frequently used in the spoken language (Maxilom, 2008). Danzaki (2015) analyzed three major forms of semantic change, namely expansion or broadening, narrowing or shrinking (specialization), and shift, in Arabic loanwords in the Hausa language. That study revealed that "semantic changes by expansion and narrowing mostly occur due to generalization of a restricted sense and restriction of a generalized sense respectively". Lexical semantic changes also happen in Bahasa Indonesia, the official language of Indonesia. It is the standardized language and has been used as a lingua franca between ethnic groups in Indonesia, in formal education, in government offices, in the mass media, and elsewhere. With hundreds of local ethnic groups, local languages or vernaculars are also used by its people, and this use of local varieties affects the use of Bahasa Indonesia, especially in terms of lexical semantic change. Semantic changes in Bahasa Indonesia have also been examined in several studies.
Darheni (2011) showed that Indonesian vocabulary has undergone dynamic development and change over the years, through broadening (extension) of meaning, narrowing of meaning, pejoration, amelioration, synesthesia, and association. An example of broadening in Bahasa Indonesia is the word karantina ('quarantine'), which used to be associated with the isolation of a person infected by a virus or a similar infectious disease. In today's use of Bahasa Indonesia, however, karantina has a more positive meaning than it used to have: it now also refers to a place where people are kept together for training or capacity-building activities. Nugraha (2018) examined the meaning extension of an Indonesian lexeme meaning 'child', describing its literal and extended meanings and a semantic scheme showing the extension of meaning from the literal sense to its extended domains. The literal meaning of the lexeme is 'second offspring', while the extended meaning has approximately seven senses, i.e. 'urutan kelahiran', 'manusia yang masih kecil', 'binatang yang masih kecil', 'pohon kecil atau tanaman yang tumbuh pada tumbuh-tumbuhan yang lebih besar', 'orang yang berasal dari atau dilahirkan di suatu daerah', 'orang yang termasuk dalam suatu golongan', and 'yang lebih kecil daripada yang lain' ('order of birth', 'a little human', 'a little animal', 'a small tree or plant growing on a larger plant', 'one who comes from or is born in an area', 'a person belonging to a class', and 'that which is smaller than the others'). Sembiring (2013) analyzed the semantic/meaning extension of words in the newspaper Pontianak Post. That study revealed that in the political, social, and economic sections of the newspaper, 153 words underwent meaning change, with 118 words experiencing broadening/widening of meaning. For example, the word penyakit ('illness') basically refers to (1) something that causes trouble in living things, or (2) health problems caused by bacteria, viruses, or abnormalities in the physiological system or tissues in the organs of the body (Kamus Besar Bahasa Indonesia, 2016). However, the use of penyakit in the newspaper shows that the word has acquired a wider meaning, namely social ills in society. Banks (2004) stated that semantic change can occur in any lexical category, not only in nouns and verbs but also in adjectives and adverbs. The word sudah in Bahasa Indonesia is an adverb; based on Kamus Besar Bahasa Indonesia, the word has 8 meanings (Kamus Besar Bahasa Indonesia, 2016). According to Payne (2011), "any full lexical word that isn't clearly a noun, a verb, or an adjective is often considered to be an adverb". An adverb basically functions as the head of an adverb phrase which modifies a verb (e.g. spoke quietly), an adjective (e.g. really awful), another adverb (e.g. very quietly), or, more rarely, a noun (e.g. the events recently). Adverbs are traditionally divided into various meaning-related categories, such as adverbs of manner (e.g. hurriedly, sideways, thus), modality (e.g. perhaps, probably, certainly), time (e.g. later, never, often), degree or extent (e.g. exceedingly, very), and frequency (e.g. daily) (Aarts et al., 1993), as well as adverbs of hedging (e.g. sort of, like) and place/location (e.g. over there, here) (Payne, 2011). Huddleston and Pullum (2005) note that the term 'adverb' is based on the function of these words as modifiers of verbs.
However, adverbs are also used as modifiers of adjectives, and a good many modify other adverbs as well. We can therefore say that adverbs function to modify all categories other than nouns. Although it can modify almost all categories, the only places an adverb can naturally occur in a clause are at the beginning (Surreptitiously the dog watched the fluffy cat.), at the end (The dog watched the fluffy cat surreptitiously.), and at the major constituent boundary between the subject and the predicate (The dog surreptitiously watched the fluffy cat.). It can possibly occur between the verb and its object, but this is highly unnatural (Payne, 2011). Below are some examples of the use of the adverb sudah in sentences in Bahasa Indonesia, based on its use in Kamus Besar Bahasa Indonesia.
(a) …
(b) … 'He is already good at reading.'
(c) Bapak sudah berangkat ke Jakarta. 'Father has already left for Jakarta.'
(d) … 'After that, he is called by his father.'
(e) Jangan pikirkan yang sudah terjadi. 'Don't think about what has happened.'
The sentences above show where sudah can be placed in a clause and which categories it modifies. In (a) and (c), sudah is located before the verbs (belajar, berangkat) to modify them. In (b), it is placed in front of an adjective (pandai) to modify it. In (d), sudah functions as an adverb of time to indicate what happens after a particular event or time, while in (e) it modifies the verb terjadi in the relative clause yang sudah terjadi. However, in spoken Bahasa Indonesia in Sumba, the adverb sudah is also commonly used in a different arrangement, in which it is placed after the verb. This suggests that the word sudah may have undergone semantic or meaning broadening. This study aims to analyze the use of the adverb sudah as it occurs in spoken Bahasa Indonesia in Sumba and the meanings and functions that it carries.

II. METHODS

This is a descriptive qualitative study. It descriptively elaborates the new meanings of the word sudah resulting from broadening in the spoken use of Bahasa Indonesia in Sumba. The data were oral uses of the language collected through note-taking over 3 months in several places, namely a campus, houses, and public places such as markets and stores. The collected data were then selected using a meaning-analysis method, and the agih method was applied to analyze the parts of speech in the sentences. For comparison, the selected data were then rearranged by moving the position of the word sudah in order to see how its position affects the meaning. Two informants were asked to explain the meanings of the sentences used as data, in order to establish the meanings and to see whether they differ from those in the dictionary.

III. RESULTS AND DISCUSSION

This section focuses on the semantic change of the word sudah as observed in the spoken use (conversation) of people on Sumba island. Examples of the data are presented before the analysis. From the data collected, 23 sentences were selected and analyzed to examine the broadening of the meaning of sudah. The data were grouped based on the results of the interviews with the informants, in which the informants were asked about the overall meaning of each sentence. The results showed that the meanings of sudah can be divided into three groups, as presented below.
To Give an Order or to Ask Other People to Do Something
1. Mandi cepat sudah. (Ordering someone to take a bath quickly.)
2. … (Ordering Ade to eat.)
3. … (Ordering someone to wake up.)
4. Kasih mati air sudah, sudah penuh. (Ordering someone to turn off the water.)
5. … (Asking the mother to go first; the speaker will follow.)
6. Kasih mati sudah itu komputer. (Asking someone to turn off the computer.)
7. Tanda tangan sudah di bawah ini. (Asking someone to sign the paper.)
8. Kerja sudah itu tugas. (Ordering students to do the assignment.)
9. Bu, pesan sudah makanan untuk rapat. (Asking someone to order the food for the meeting.)
10. … (Asking the team members to prepare documents.)
11. … (Ordering students in class to submit the assignment.)
12. Jalan sudah, lampu sudah hijau. (Asking someone to go because the green light is on.)
13. Bayar sudah cepat itu parkir. (Asking someone to pay for the parking.)
14. … (Asking the girl to pick up the vegetables.)
15. … (Asking other people to take the biscuits; the speaker will pay.)
16. … (Asking someone to quickly choose the powder because the speaker wants to go home soon.)
17. … (Asking the seller, a man, to lower the price of the vegetables to Rp.10.000.)
To Invite Other People to Do Something
1. … (Inviting someone else to go home.)
2. Mari kita jalan sudah sekarang. (Inviting someone else to go now.)
3. Mari kita buang sudah ini sampah di sana. (Inviting someone else to throw the garbage away over there.)
To Inform that Something Is about to Happen or to Be Done Soon
1. Ini saya print sudah kita punya laporan. (Informing other people that the speaker is about to print the report soon.)
2. Ini saya start motor sudah sekarang. Tunggu saya di depan ya. (Informing someone else that the speaker is about to start the motorcycle and asking that person to wait in front of the house.)
3. Karena kamu tidak pakai lagi, saya buang sudah ini kertas semua. (Informing someone else that the speaker is about to throw away all the papers because they are no longer used.)
In all the sentences presented above, sudah is placed after the verb. Some are full sentences with a subject and a predicate, while some lack a subject (the implied subject being the listener). The analysis considered only the verb phrase (verb + adverb), which was then compared with the opposite arrangement (adverb + verb) given in the dictionary. As mentioned previously, sudah can be placed before a verb to modify it, in front of an adjective to modify it, or can serve as an adverb of time indicating what happens after a particular event or time. In the data above, it is also possible to insert another category between the verb and sudah, as in items 1 and 16 of the first group, where the adjective cepat intervenes, and in item 4 of the first group and item 2 of the third group, where a noun (air, motor) intervenes. This insertion, however, does not change the imperative meaning of the arrangement. The phrases jalan sudah, buang sudah, and kasih mati sudah are analyzed only once each below, since they appear more than once in the data.
1a) Mandi sudah / 1b) Sudah mandi. The phrase in 1a shows that the act of mandi (taking a bath) has not happened yet: the speaker asks the listener to take a bath. 1b, in contrast, conveys that someone has already taken a bath.
2a) Makan sudah / 2b) Sudah makan. 2a shows that the act of makan (eating) has not happened yet: the speaker asks the listener to eat. 2b conveys that someone has already eaten.
3a) Bangun sudah / 3b) Sudah bangun. In 3a the act of bangun (waking up) has not happened yet: the speaker asks the listener to wake up. 3b conveys that someone has already woken up.
4a) Kasih mati sudah / 4b) Sudah kasih mati. 4a shows that the act of kasih mati (turning something off, e.g. a TV or music) has not happened yet: the speaker asks the listener to turn it off. 4b conveys that something has already been turned off.
5a) Jalan sudah / 5b) Sudah jalan. 5a indicates that the act of jalan (going in a direction) has not happened yet: the speaker asks the listener to go. 5b, by contrast, conveys that someone has already gone.
6a) Tanda tangan sudah / 6b) Sudah tanda tangan. In 6a the speaker asks the listener to sign a paper; the act of tanda tangan (signing) has not happened yet. In 6b the signing has already been carried out.
7a) Kerja sudah / 7b) Sudah kerja. 7a shows that the act of kerja (working) has not happened yet: the speaker asks the listener to work. 7b conveys that someone has already done the work.
8a) Pesan sudah / 8b) Sudah pesan. 8a indicates that the act of pesan (ordering, e.g. the food) has not happened yet and the speaker asks the listener to order it. 8b indicates that something (e.g. the food) has already been ordered.
9a) Siap sudah / 9b) Sudah siap. In 9a the speaker asks the listener to prepare something (e.g. a document); the act of siap (preparing) has not happened yet. 9b conveys that something has already been prepared.
10a) Kumpul sudah / 10b) Sudah kumpul. In 10a the act of kumpul (submitting something) has not happened yet and the speaker asks the listener to submit it (e.g. the assignment). In 10b the submission has already been carried out.
11a) Bayar sudah / 11b) Sudah bayar. In 11a the act of bayar (paying) has not happened yet and the speaker asks the listener to pay. In 11b something has already been paid for.
12a) Angkat sudah / 12b) Sudah angkat. 12a shows that the act of angkat (picking up) has not happened yet and the speaker asks the listener to pick something up (e.g. the vegetables). 12b conveys that something has already been picked up.
13a) Ambil sudah / 13b) Sudah ambil. In 13a the act of ambil (taking something) has not happened yet and the speaker asks the listener to take it; in 13b something has already been taken.
14a) Pilih sudah / 14b) Sudah pilih. 14a shows that the act of pilih (choosing) has not happened yet: the speaker asks the listener to choose. 14b conveys that someone has already chosen something.
15a) Kasih sudah / 15b) Sudah kasih. In 15a the act of kasih (giving something to the listener or to other people) has not happened yet and the speaker asks the listener to give it. In 15b something has already been given.
16a) Pulang sudah / 16b) Sudah pulang. 16a shows that the act of pulang (going back home) has not happened yet and the speaker asks the listener to go home. In 16b someone has already gone back home.
17a) Buang sudah / 17b) Sudah buang. The act of buang (throwing something away, e.g. garbage) has not been carried out in 17a, while in 17b it has been done.
18a) Print sudah / 18b) Sudah print. In 18a the act of printing has not happened yet and the speaker asks the listener to print (e.g. a document). 18b shows that the printing has been done.
19a) Start sudah / 19b) Sudah start. In 19a the act of starting (e.g. a motorcycle) has not happened yet and the speaker asks the listener to start it; 19b shows that it has already been started.
The syntactic arrangement of the phrases in 1a-19a, in which sudah is placed after the verb, clearly shows that the action of the verb has not been carried out or has not happened yet. This is the opposite of the dictionary use of sudah as an adverb showing that something has already happened or been done. The contrast between the established and the new function is shown in the examples below.
24a) Rina sudah makan. / 24b) Rina, makan sudah! Sentence 24a, in which sudah is placed before the verb makan, states that the subject Rina has eaten (e.g. her food or lunch), whereas 24b, in which sudah is placed after makan, conveys that Rina has not eaten yet and is being asked to eat.
25a) Rina sudah pulang. / 25b) Rina, mari pulang sudah! In 25a, sudah precedes the verb pulang and indicates that Rina has already gone back home. In 25b, sudah follows pulang and indicates that Rina has not gone home yet and is being invited by the speaker to go home. For this function, sentences such as 25b usually make use of particles such as mari and ayo, which are used as interjections to invite people to do something.
26a) Saya sudah print dokumennya. / 26b) Saya print sudah dokumennya. Sentence 26a, in which sudah precedes the verb print, states that the subject saya ('I') has printed the document. In 26b, where the verb print precedes sudah, the printing has not been carried out yet and the speaker is informing the listener that she or he is about to print the document.
It can be inferred that the general result of the broadening is that sudah can also be used to refer to something that has not happened yet. The broadening also brings the 3 new meanings of sudah mentioned above: to give an order or ask someone to do something, to invite other people to do something, and to inform or tell other people that the speaker is about to do something or that something is about to happen soon. In addition, the use of sudah in these new functions implies that the speaker wants the listener to do the action in the near future, or as soon as possible after the time of speaking. It should be noted that sudah with these new meanings is usually found in the spoken use of Bahasa Indonesia on Sumba island and in much of eastern Indonesia. It can also be found in informal writing such as social media chats and conversations, but it is hard to find in formal written Bahasa Indonesia such as printed textbooks or newspapers. As it is used mostly with an imperative function in spoken language, the speaker's tone of voice also supports the meaning of sudah as a device for giving orders: during data collection it was clear that when sudah is used in these new functions, the speaker's tone is raised to signal the imperative meaning.

IV. CONCLUSION

This study has analyzed the semantic broadening of the adverb sudah in the spoken use of Bahasa Indonesia in Sumba. From the discussion, it can be concluded that the adverb sudah has undergone semantic broadening in its use.
As stated in Kamus Besar Bahasa Indonesia, the adverb sudah is used to modify verbs and adjectives and also functions as an adverb of time indicating that an occurrence has already happened or that something has already been carried out; in this arrangement, sudah is placed before the verb or adjective it modifies. This study reveals that in the spoken use of Bahasa Indonesia in Sumba, sudah is used with a different function and carries a different meaning through a different arrangement in phrases and sentences. Three new meanings are brought about by the broadening process: (1) to give an order or to ask other people (the listener) to do something, (2) to invite other people to do something, and (3) to inform that something is about to happen or to be done soon. With these new meanings, the adverb sudah can also function as an adverb showing that something has not happened yet or has not been done yet, the opposite of what the dictionary suggests. This is supported by the arrangement in phrases and sentences in which sudah is placed after the verb.
Velocity-selective direct frequency-comb spectroscopy of atomic vapors

We present an experimental and theoretical investigation of two-photon direct frequency-comb spectroscopy performed through velocity-selective excitation. In particular, we explore the effect of repetition rate on the $\textrm{5S}_{1/2}\rightarrow \textrm{5D}_{3/2, 5/2}$ two-photon transitions excited in a rubidium atomic vapor cell. The transitions occur via step-wise excitation through the $\textrm{5P}_{1/2, 3/2}$ states by use of the direct output of an optical frequency comb. Experiments were performed with two different frequency combs, one with a repetition rate of $\approx 925$ MHz and one with a repetition rate of $\approx 250$ MHz. The experimental spectra are compared to each other and to a theoretical model.

I. INTRODUCTION

The use of ultra-short pulses to perform precision spectroscopy has gained increased interest due to the development of phase-stabilized optical frequency combs (see Ref. [1] for a review of direct frequency-comb spectroscopy). One of the chief advantages of using the output of an optical frequency comb to directly excite atomic transitions is the spectral versatility afforded by the comb. Stabilized optical frequency combs of the types used here contain ~10^5-10^6 optical frequencies spanning the entire visible spectrum. Thus, the optical frequency comb acts effectively as a large number of cw lasers with frequencies given by

ω_n = 2π(n f_r + f_0),   (1)

where n is an integer denoting the mode of the comb, f_r is the repetition rate of the laser, and f_0 is the carrier-envelope-offset frequency (for a review of frequency combs see, e.g., Ref. [2]). Two-photon direct frequency-comb spectroscopy with a fully stabilized comb was first applied to the study of cold rubidium atoms [3][4][5]. In these experiments independent control of f_0 and f_r was used to explore the effects of resonant enhancement of the two-photon transition rate due to resonance with an intermediate state. Two-photon direct frequency-comb spectroscopy has also been applied to the study of room-temperature atomic cesium confined to a vapor cell [6]. The transitions studied in that work were excited via two-photon excitation through a resonant intermediate state using velocity-selective excitation. Here, we further explore velocity-selective, two-photon excitation both theoretically and experimentally. In particular, we consider the effect of the repetition rate of the comb on the spectra. We identify two different regimes: one in which the repetition rate of the comb is larger than the Doppler width of the resonance corresponding to the first stage of the transition and one in which the repetition rate is smaller than the Doppler width of the resonance of the first stage of the transition. We present data for the 5S_{1/2} → 5P_{1/2} → 5D_{3/2} and 5S_{1/2} → 5P_{3/2} → 5D_{3/2,5/2} transitions in atomic rubidium taken with two different optical frequency combs with different repetition rates. We compare our results to a model calculation and see how the velocity selection process differs for the two cases. In addition, we explore how the energy of the intermediate state affects the two-photon spectra and explain the qualitative differences in the spectra that arise from excitation through different intermediate states.
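As a rough illustration of Eq. (1) and of the size of the mode numbers quoted later in the text, the short Python sketch below finds the comb modes nearest the first- and second-stage transition frequencies for the two repetition rates used here. The transition frequencies are approximate literature values rounded for this estimate, and the 20 MHz offset frequency is only a representative choice, not a statement about how either comb was actually configured.

```python
# Illustration of Eq. (1): each comb line sits at f_n = n * f_r + f_0.
def nearest_mode(nu, f_rep, f_ceo):
    """Return the mode number n whose frequency n*f_rep + f_ceo is closest to nu (all in Hz)."""
    n = round((nu - f_ceo) / f_rep)
    return n, n * f_rep + f_ceo

NU_5S_5P32 = 384.23e12    # 5S1/2 -> 5P3/2 (~780 nm), approximate value
NU_5P32_5D52 = 386.34e12  # 5P3/2 -> 5D5/2 (~776 nm), approximate value

for f_rep in (925e6, 250e6):   # the two combs used in the experiments
    f_ceo = 20e6               # representative offset frequency (assumption)
    n1, _ = nearest_mode(NU_5S_5P32, f_rep, f_ceo)
    n2, _ = nearest_mode(NU_5P32_5D52, f_rep, f_ceo)
    print(f"f_r = {f_rep/1e6:.0f} MHz: n1 ~ {n1}, n2 ~ {n2}, n_T = n1 + n2 = {n1 + n2}")
```

Running this gives mode numbers of order 10^5 for the 925 MHz comb and of order 10^6 for the 250 MHz comb, consistent with the values discussed below.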
A. Velocity Selective Resonance

The two-photon transition probability between two states |g⟩ and |f⟩ is given by [7]

W(g,f) ∝ (I_1 I_2 γ_f) / {[ω_{g:f} − ω_1 − ω_2 + (k_1 + k_2)·v]² + γ_f²/4} × |Σ_k ⟨f|d·ê_2|k⟩⟨k|d·ê_1|g⟩ / (ω_{g:k} − ω_1 + k_1·v − iγ_k/2)|²,   (2)

where v is the velocity of the atom, d is the electric dipole operator, ê_{1(2)} is the unit vector along the direction of the polarization for the first (second) laser beam of intensity I_{1(2)}, γ_k and γ_f are the homogeneous line widths of the states |k⟩ and |f⟩, respectively, and ω_{g:k} and ω_{g:f} are the resonant angular frequencies of the transitions |g⟩ → |k⟩ and |g⟩ → |f⟩, respectively. The sum over k includes all possible intermediate states. We have assumed that the laser frequency ω_1 is near the single-photon resonance of the |g⟩ → |k⟩ transitions and have neglected the possibility of ω_2 exciting the first stage of the transition. If we further assume that there is a single intermediate state that is close in energy to ω_1, we can reduce the sum to a single term consisting of the nearly resonant intermediate state. The maximum transition probability will occur when both the two-photon resonance condition and the single-photon resonance condition are met:

ω_1 + ω_2 − (k_1 + k_2)·v = ω_{g:f},   (3)
ω_1 − k_1·v = ω_{g:k}.   (4)

When an optical frequency comb is used to excite an atomic transition, the transition rate must be summed over all of the optical modes in the comb [the frequencies of which are given by Eq. (1)]. Resonance will occur if there are any comb modes n_1 and n_2 resulting in frequencies ω_1 and ω_2 that simultaneously satisfy Eqs. (3) and (4). If we consider the situation where the comb light is counter-propagated through the atomic sample (so that k_1 = −k_2), we can write the resonance conditions as

2π(n_1 f_r + f_0)(1 − v_x/c) + 2π(n_2 f_r + f_0)(1 + v_x/c) = ω_{g:f},   (5)
2π(n_1 f_r + f_0)(1 − v_x/c) = ω_{g:k},   (6)

where we have defined the x axis to be the direction of propagation of the light and taken v_x to be the x component of the atomic velocity. For a given n_1, n_2, and f_0 there exists a repetition rate, f_r, which will satisfy the above equations for some value of v_x. If the velocity class corresponding to the v_x that satisfies the resonance conditions is populated in the atomic distribution, there will be resonant excitation to the state |f⟩ in the atomic sample. In this work we have investigated this velocity-selective excitation with two different frequency combs having different repetition rates. The first situation we consider is the case where the repetition rate of the comb is larger than the Doppler width of the resonance corresponding to the first stage of the excitation. In this case there will exist, at most, one comb mode n_1 which is resonant with a velocity class present in the atomic sample. As a result, there will be at most a single pair of comb modes n_1 and n_2, summing to a total mode number n_T = n_1 + n_2, that satisfies the resonance conditions given by Eqs. (5) and (6) for a velocity class present in the atomic sample. As the repetition rate is scanned, the velocity class excited to the first stage of the transition changes as the frequency ω_1 sweeps over the Doppler-broadened transition. Once the velocity class that is excited to the intermediate state |k⟩ also satisfies the resonance condition for ω_2 = 2π(n_2 f_r + f_0), the atoms are excited to the final state |f⟩. This results in a sub-Doppler resonant peak. If the repetition rate is increased over a large enough range, a different pair of comb modes, adding up to n_T' = n_1 + n_2 − 1, will satisfy the resonance requirement and the resonant peak will recur.
However, because the velocity class that satisfies the resonant condition will, in general, be different, the relative amplitude of the n_T' peak may vary greatly as compared to the n_T peak, reflecting the relative populations in the resonant velocity classes. The amount the repetition rate must scan before the spectrum repeats is

Δf_r = f_r / n_T.   (7)

If the laser beams are exactly counter-propagating throughout the excitation region, then the width of the resonance in terms of the repetition rate, δf_r, is approximately the natural line width of the excited state divided by the total mode number:

δf_r ≈ γ_f / (2π n_T).   (8)

The second case we consider is that in which the repetition rate of the frequency comb is less than the Doppler width of the resonance corresponding to the first stage of the transition. In this case there are multiple comb modes resonant with the first stage of the transition for different velocity classes. Consequently, there are multiple pairs of comb modes (n_1, n_2), having the same total mode number n_T = n_1 + n_2, that will lead to resonant excitation for different velocity classes. In general, the different velocity classes will be resonant at different repetition rates. The result is a spectrum that has a comb-like structure as different pairs of comb modes contribute. Figure 1(a) shows the excitation of the different velocity classes to the intermediate and final states for three different values of the repetition rate. The resonant repetition rate, f_{r,0}, is the repetition rate at which the largest number of atoms are excited to the final state. This corresponds to excitation of a velocity class that is near zero. There are additional resonances when the repetition rate is below (above) the resonance value, leading to excitation of velocity classes that have negative (positive) velocity along the propagation direction of the laser beam that excites the first stage of the transition.

[Fig. 1 caption: The bottom trace shows the ground-state velocity distribution for a rubidium vapor cell at a temperature of T = 373 K. As the repetition rate is varied, the velocity classes excited to the intermediate state move through the distribution. When the repetition rate satisfies the resonant condition for a given (n_1, n_2) and for a specific velocity class, atoms are excited to the final state. Panel (a) shows the case where the resonant repetition rate is different for the different mode pairs and velocity classes, leading to multiple peaks in the spectrum; the calculation is based on a three-level model in which the frequencies of the intermediate and final states correspond to those of the 5D_{3/2} state excited through the 5P_{1/2} state. Panel (b) shows the case where the intermediate state is close to the halfway point of the energy between the ground and final states; in this case the resonant repetition rate is the same for the different velocity classes and mode pairs sharing the same n_T = n_1 + n_2. The frequencies used in the calculation correspond to those of the 5D_{5/2} state excited through the 5P_{3/2} state. The parameters used for the comb light correspond to those used in the experiment with the smaller repetition rate frequency comb (f_r ≈ 250 MHz).]
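The distinction between the two regimes can be made concrete with a quick estimate of the Doppler width of the first excitation stage. The sketch below uses the approximate 5S_{1/2} → 5P_{3/2} transition frequency and the 373 K temperature of the distribution shown in Fig. 1; the numbers are illustrative, not the values used in the paper's calculation.

```python
import math

K_B = 1.380649e-23                         # Boltzmann constant, J/K
M_RB85 = 84.911789 * 1.66053906660e-27     # mass of 85Rb, kg
C = 2.99792458e8                           # speed of light, m/s
NU1 = 384.23e12                            # 5S1/2 -> 5P3/2 frequency (approximate), Hz

T = 373.0
# FWHM of the Doppler-broadened first-stage resonance.
doppler_fwhm = (NU1 / C) * math.sqrt(8 * K_B * T * math.log(2) / M_RB85)
print(f"Doppler FWHM of the first stage at {T:.0f} K: {doppler_fwhm/1e6:.0f} MHz")
# ~580 MHz: larger than the 250 MHz repetition rate (several modes resonant),
# but smaller than the 925 MHz repetition rate (at most one mode resonant).
```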
The case where the repetition rate of the comb is less than the Doppler width of the first stage of the transition exhibits a significant change if the energy of the intermediate state is nearly halfway between the ground- and final-state energies. This is the case for the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} transition in rubidium, where the intermediate state is only 1.5 THz from the mid-point of the ground-to-final-state energy separation. In this case, the different velocity classes satisfy the resonance conditions at nearly the same repetition rates. This can be understood by noting that the Doppler shift is wavelength dependent. As a result, the apparent spacing of comb modes in "velocity space" is similar for the first and second stages of the transition if their wavelengths are similar, and different if the wavelengths differ. In the case where the wavelengths are similar, the spectrum is significantly simpler than in the more general case described above and there is no comb-like structure in the spectrum. This situation is illustrated in Fig. 1(b), where the energy levels were based on the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} transition in rubidium. The resonant peak contains contributions from multiple velocity classes that are simultaneously excited. Figure 2 shows the calculated fluorescence spectra as the repetition rate is scanned for the three-level model, both for the case where the energy of the intermediate state is nearly half of the two-photon energy (1.5 THz away from the mid-point) and for the case where it is not (7.5 THz away). These values correspond to the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} and 5S_{1/2} → 5P_{1/2} → 5D_{3/2} transitions, respectively.

This dependence of the character of the spectrum on the intermediate-state energy can be understood by considering two pairs of modes (n_1, n_2) and (n_1', n_2') = (n_1 + 1, n_2 − 1) that sum up to the same total mode number n_T. We suppose that f_r is the resonant repetition rate for a velocity class v_x with mode numbers (n_1, n_2) and f_r' is the resonant repetition rate for a different velocity class v_x' excited by frequencies with mode numbers (n_1', n_2'). The condition that the two resonant peaks overlap is that Δf_r = f_r' − f_r < γ_f/(2π n_T). Solving the resonance conditions [Eqs. (5) and (6)] gives

Δf_r ≈ f_r (n_2 − n_1) / (2 n_1 n_2),   (9)

where we have kept only terms first order in Δf_r and have assumed f_r ≫ f_0 for simplicity. In the case of the excitation of the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} transition with a comb that has f_r ≈ 250 MHz, we have n_1 ≈ 1.536 × 10^6 and n_2 ≈ 1.544 × 10^6, giving Δf_r ≈ 0.4 Hz. This is smaller than the resonant line width of δf_r ≈ 0.9 Hz obtained from Eq. (8).

[Fig. 2 caption: … The energies used correspond to those of the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} transition. The different velocity classes are all resonant near the same repetition rate, resulting in a single peak consisting of unresolved contributions from different velocity classes. The bottom plot shows the calculated spectrum for the situation where the intermediate state is far from the mid-point of the two-photon transition energy (7.5 THz from the mid-point); the energies used correspond to those of the 5S_{1/2} → 5P_{1/2} → 5D_{3/2} transition. The parameters used for the comb light correspond to those used in the experiment with the small repetition rate frequency comb (f_r ≈ 250 MHz).]
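The repetition-rate scales just discussed can be checked numerically. The sketch below evaluates the repeat interval of Eq. (7) for both combs and the mode-pair offset of Eq. (9) for the 250 MHz comb; the transition frequencies are approximate values used only for this estimate.

```python
NU_TOTAL = 770.57e12     # Hz, approximate 5S -> 5D two-photon frequency
NU_STAGE1 = 384.23e12    # Hz, approximate 5S1/2 -> 5P3/2 frequency
NU_STAGE2 = NU_TOTAL - NU_STAGE1

for f_rep in (925e6, 250e6):
    n_total = round(NU_TOTAL / f_rep)      # n_T = n_1 + n_2 (f_0 neglected)
    repeat = f_rep / n_total               # Eq. (7): scan needed before the spectrum repeats
    print(f"f_r = {f_rep/1e6:.0f} MHz: n_T ~ {n_total}, repeat interval ~ {repeat:.2f} Hz")

# Offset between the resonances of adjacent mode pairs (n1, n2) and (n1+1, n2-1)
# for the 250 MHz comb, using the first-order expression of Eq. (9).
f_rep = 250e6
n1 = round(NU_STAGE1 / f_rep)
n2 = round(NU_STAGE2 / f_rep)
delta_fr = f_rep * abs(n2 - n1) / (2 * n1 * n2)
print(f"n1 = {n1}, n2 = {n2}, |delta f_r| ~ {delta_fr:.2f} Hz")
```

The output reproduces the values quoted in the text: a repeat interval of roughly 1.1 kHz for the 925 MHz comb and 80 Hz for the 250 MHz comb, and a mode-pair offset of roughly 0.4 Hz.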
B. Two-Photon Transitions in Rubidium

Figure 3 shows an energy level diagram for the two stable isotopes of rubidium (85Rb, abundance 72.2%, and 87Rb, abundance 27.8%). The transitions considered here are transitions to the 5D_{5/2} and 5D_{3/2} states through the 5P_{1/2} and 5P_{3/2} intermediate states. The nonzero nuclear spin of both isotopes (I = 5/2 for 85Rb and I = 3/2 for 87Rb) results in hyperfine structure for all of the states involved. Thus, there are a number of possible transitions, leading to a complex and rich fluorescence spectrum. Including the hyperfine structure, the general two-photon transition rate becomes

W(F, F'') ∝ Σ_{M_F, M_{F''}} (I_1 I_2 γ_f) / {[ω_{nL_J F : n''L''_{J''} F''} − ω_1 − ω_2 + (k_1 + k_2)·v]² + γ_{n''L''_{J''}}²/4} × |Σ_{J', F', M_{F'}} ⟨n''L''_{J''} F'' M_{F''}|d·ê_2|n'L'_{J'} F' M_{F'}⟩⟨n'L'_{J'} F' M_{F'}|d·ê_1|nL_J F M_F⟩ / (ω_{nL_J F : n'L'_{J'} F'} − ω_1 + k_1·v − iγ_{n'L'_{J'}}/2)|²,   (10)

where M_F, M_{F'}, and M_{F''} are the projections of the total angular momenta F, F', and F'' along the axis of quantization, γ_{nL_J} is the homogeneous line width of the state |nL_J⟩, and ω_{nL_J F : n'L'_{J'} F'} is the resonant angular frequency of the transition |nL_J F⟩ → |n'L'_{J'} F'⟩. The sum over J' runs from 1/2 to 3/2, and F' runs over the hyperfine levels of the intermediate state. For given polarizations of the two beams, the Wigner-Eckart theorem can be used to relate the matrix elements to a reduced matrix element that is independent of the magnetic sublevels. The reduced matrix elements in the F basis can be related to the reduced matrix elements in the J basis using standard angular momentum relations (see, e.g., [8]). Using these relations, the spectra were calculated by inputting known values for the transition frequencies [9][10][11][12] and integrating the transition probability over the velocity distribution. The number of comb modes used for the calculation varied depending on the repetition rate being considered. Typically, 15-20 modes were used in the calculations for the high repetition rate laser (f_r ≈ 924 MHz) and 20-25 modes were used for the calculations for the lower repetition rate laser (f_r ≈ 250 MHz). The calculation also accounted for depletion of the power for the first stage of the transition as the beam propagated through the vapor cell.

The two-photon transition rate is proportional to the product of the intensities of the resonant laser fields. As a result, it is advantageous to focus the light in order to increase the transition rate. This results in a localized region that provides the largest contribution to the fluorescence signal. However, if the atomic density is large enough, the power resonant with the first stage of the transition can be depleted through absorption before the focused region of excitation. Since the different resonances correspond to excitation of different velocity classes, this power depletion varies for the different resonance peaks. This effect was taken into account in the calculation. For a given resonant velocity class, v, the number of atoms resonant with the first stage of the transition was calculated using a Voigt profile based on the Doppler distribution and a number density derived from the saturated vapor pressure. The depletion of the resonant intensity for the first stage of the transition was then modeled as

I_1 = I_{10} exp(−σ_0 ξ n(v) L),   (11)

where I_{10} is the undepleted intensity, σ_0 is the resonant scattering cross section for the transition of interest, ξ is the isotopic abundance, n(v) is the density of atoms in the resonant velocity class, and L is an effective scattering length. The effective scattering length was adjusted to provide good qualitative agreement with the experimental data. Power depletion of the light resonant with the second stage of the transition is unlikely to be significant since the population of the intermediate state is much less than the population of the ground state.
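A minimal numerical sketch of this depletion model is given below. The total density, cross section, velocity-slice width, and effective length are placeholder values chosen only to illustrate Eq. (11) and the velocity-class density n(v); they are not the values used in the calculation described above.

```python
import math

K_B = 1.380649e-23
M_RB85 = 84.911789 * 1.66053906660e-27

def velocity_class_density(n_total, v_x, dv, T, m=M_RB85):
    """1-D Maxwell-Boltzmann density (atoms/m^3) in a velocity slice of width dv around v_x."""
    sigma_v = math.sqrt(K_B * T / m)
    f = math.exp(-v_x**2 / (2 * sigma_v**2)) / (math.sqrt(2 * math.pi) * sigma_v)
    return n_total * f * dv

def depleted_intensity(I0, sigma0, xi, n_v, L):
    """Eq. (11): Beer-Lambert attenuation of the first-stage intensity."""
    return I0 * math.exp(-sigma0 * xi * n_v * L)

# Example with placeholder numbers: total Rb density ~1e17 m^-3, a velocity
# slice a few natural line widths wide (~5 m/s), 85Rb abundance 72.2 %, an
# effective length of 1 cm, and a resonant cross section of order 1e-13 m^2.
n_v = velocity_class_density(n_total=1e17, v_x=0.0, dv=5.0, T=328.0)
I_rel = depleted_intensity(I0=1.0, sigma0=1e-13, xi=0.722, n_v=n_v, L=0.01)
print(f"velocity-class density: {n_v:.2e} m^-3, relative intensity at focus: {I_rel:.2f}")
```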
III. EXPERIMENTAL SETUP

Data were collected by use of two different experimental setups. One of them utilized a frequency comb based on a Ti:Sapphire laser located at Oberlin College. The other utilized a frequency comb based on a fiber laser located at California State University - East Bay.

A. The Ti:Sapphire-Comb Experiment

The frequency comb based on the Ti:Sapphire mode-locked laser is similar to the "Standard Ti:Sapphire" resonators discussed in Ref. [13]. The cavity consists of two mirrors with a radius of curvature of 30 cm and two plane mirrors. The group-velocity dispersion (GVD) of the curved mirrors is compensated, with a net negative group-velocity dispersion of −100 fs² over a bandwidth of 620-1050 nm. One of the plane mirrors also has a negative group-velocity dispersion of −40 fs². The other plane mirror is 99% reflective and serves as the output coupler. The Ti:Sapphire crystal has an absorbance of α = 5.65 at 514 nm and an optical path length of 1.5 mm. The repetition rate of the laser is ≈ 925 MHz. The Ti:Sapphire resonator is pumped with 5.5 W of light from a diode-pumped frequency-doubled neodymium vanadate laser at 532 nm. The spectral bandwidth of the output of the Ti:Sapphire laser is ≈ 30 nm, centered at ≈ 780 nm, with an output power of ≈ 550 mW. This light is coupled into ≈ 30 cm of a nonlinear photonic crystal fiber with a zero-GVD wavelength at 790 nm. The light is spectrally broadened by the fiber to span ≈ 500-1100 nm. Approximately 200 mW of light is coupled through the photonic crystal fiber. The output of the fiber is split according to its spectral region by use of a short-pass interference filter. The spectral regions from 500-650 nm and from 1000-1100 nm are used for the stabilization of the carrier-envelope offset frequency, f_0, and the repetition rate, f_r, of the optical frequency comb, while the remaining spectral region can be used for spectroscopy. The carrier-envelope offset frequency is detected via the self-referencing technique described in Ref. [14]. The infrared light near 1060 nm is doubled using a periodically poled lithium niobate crystal with a length of 1 mm that is thermally stabilized to 381 K. The doubled infrared light and the green light that is directly produced in the fiber are filtered by use of an interference filter centered at 530 nm with a pass band of 30 nm and detected with an amplified photodiode with a bandwidth of 2 GHz. The interference of the doubled infrared light with the directly generated green light results in a radio-frequency signal at the carrier-envelope offset frequency. The signal-to-noise ratio of f_0 is typically 40 dB in a 300 kHz resolution bandwidth. The carrier-envelope offset frequency is phase-locked to a signal generator using standard phase-locking techniques (see, e.g., Ref. [15]). The line width of the offset frequency is typically < 500 kHz. The repetition rate is detected with the same detector that is used to detect f_0. The output of the photodiode is split and filtered to select the repetition rate. The repetition rate is phase stabilized to a second signal generator using an rf mixer. The signal generators are referenced to a rubidium atomic clock that is steered to the global positioning system (GPS), resulting in a fractional frequency uncertainty of ≈ 10^-11 in 1 second. The 650-1000 nm light generated in the nonlinear photonic crystal fiber is passed through an interference filter to select the wavelengths of interest. For excitation through the 5P_{3/2} intermediate state the light is passed through an interference filter centered at 780 nm with a 20 nm pass band. For excitation through the 5P_{1/2} state the light is passed through an interference filter that transmits light from ≈ 785-860 nm.
This spectral range allows for excitation of the 5S_{1/2} → 5P_{1/2} → 5D_{3/2} and the 5S_{1/2} → 5P_{3/2} → 5D_{3/2,5/2} transitions. The light is focused with a 10-cm lens onto a rubidium vapor cell at ≈ 328 K. The vapor cell is a cylinder with a 25 mm diameter and a 75 mm length. The light is propagated through the curved wall of the vapor cell. A second lens with a focal length of 10 cm is used to re-collimate the laser beam. The light is retroreflected off a mirror and sent back through the vapor cell. A small fraction of the light is picked off for power monitoring by use of an 8% reflective pellicle beam splitter. Atoms excited to the 5D_{5/2} state decay via cascade through the 6P_{1/2,3/2} states, emitting light at 420 nm. This fluorescence is detected with a photomultiplier tube with an active area of 0.50 cm². An interference filter centered at 420 nm with a pass band of 20 nm and a short-pass filter at 650 nm are placed in front of the photomultiplier tube to reduce background and scattered light. The photocurrent from the photomultiplier tube is sent to a transimpedance amplifier with a gain of 10^7 V/A and a bandwidth of 20 kHz. The output of the amplifier is digitized with an analog-to-digital converter and recorded on a computer. The repetition rate of the optical frequency comb is stepped in 1 Hz intervals over a region of ≈ 1.5 kHz and the fluorescence is recorded for ≈ 1 second at each frequency step.

B. The Fiber-Comb Experiment

The frequency comb based on the erbium-doped fiber laser is a commercial system from Menlo Systems GmbH (FC1500-250-WG Optical Frequency Synthesizer) consisting of a femtosecond laser system (M-Comb) with repetition rate f_r ≈ 250 MHz whose output is connected via fiber-optic cables to (1) a fast PIN photodiode that detects the fourth harmonic of f_r (for stabilization/control of f_r), (2) an erbium-doped fiber amplifier (M-Phase EDFA) whose output is sent to an f-2f interferometer (XPS-WG 1500) used to measure the carrier-envelope offset frequency f_0 (for stabilization/control of f_0), and (3) a second erbium-doped fiber amplifier whose output is directed through a second-harmonic generating (SHG) crystal. The output power after the SHG crystal is typically ≈ 230 mW, centered about 780 nm with a FWHM bandwidth of ≈ 15 nm. The laser light is then focused into a photonic crystal fiber (PCF), which spectrally broadens the output to span roughly 600-900 nm. The power after the PCF is typically ≈ 85 mW. The fourth harmonic of f_r detected by the fast PIN photodiode is mixed with a 980 MHz signal generated by a dielectric resonator oscillator (DRO) to produce a 20 MHz intermediate frequency, which is counted and further mixed with the ≈ 20 MHz output of a tunable low-frequency direct-digital synthesizer (DDS), which enables adjustment of f_r around its nominal value of 250 MHz. The DC signal resulting from the second stage of mixing is used in a feedback loop to lock f_r by controlling an intra-cavity piezo in the M-Comb laser. All synthesizers and counters are referenced to a 10 MHz signal from a GPS-disciplined, ultrastable quartz oscillator (TimeTech Reference Generator) with relative stability better than 5 × 10^-12 in 1 second. The offset frequency f_0, which is tuned to 20 MHz, is counted and fed into a lock-in detector. The latter is referenced to a 20 MHz signal directly generated from the clock signal by frequency doubling. The lock-in detector output signal is used in a feedback loop to lock f_0 by controlling the M-Comb laser pump power.
The laser light spanning 600-900 nm generated by the PCF is then split into two beams with a 50/50 nonpolarizing beamsplitting cube. The beams pass through 10-nm bandpass interference filters centered near the wavelengths of the transitions of interest. One of the beams (corresponding to the second stage of the two-photon transition of interest) passes through a chopper wheel which modulates the light at ≈ 250 Hz. Mirrors direct the two beams so that they are counter-propagating, and the beams are focused by antireflection-coated, 10-cm focal length bi-convex lenses at the center of a cylindrical Pyrex cell (25 mm in diameter and 75 mm in length) containing Rb vapor heated to ≈ 323 K. The beams enter perpendicular to the flat circular cell windows and counterpropagate along the axis of the cell. The focal point of the beams is at the center of the cell. Fluorescence at 420 nm from the cascade decay of the upper 5D_{5/2} and 5D_{3/2} states is monitored by an amplified photomultiplier tube (ThorLABs PMM01, active area diameter = 22 mm) fitted with an interference filter centered at 420 nm with a 10 nm pass band to reduce background light. The PMT is located to the side of the cell, directly viewing the center of the cell where the counterpropagating beams are focused. Approximate alignment of the counter-propagating beams is achieved by directing the two focused beams through 100 µm pinholes without the cell in place. The alignment is subsequently improved by adjusting the position of the lenses to maximize the detected fluorescence signal when f_r is tuned to a resonant value for the two-photon transition. The output of the amplified PMT is sent to the input of a lock-in amplifier (Signal Recovery Model SR7265) referenced to the chopper wheel. The demodulated output of the lock-in amplifier (time constant = 2 s) is recorded as f_r is stepped in 0.25 Hz increments every 4 s.
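In both setups the measurement reduces to the same loop: step the repetition rate, dwell, and record the fluorescence signal. The sketch below outlines that procedure; the instrument-interface functions (set_repetition_rate, read_signal) are hypothetical placeholders, since hardware control is not described at that level in the text.

```python
import time

# Hypothetical instrument-interface stubs. In the real experiments f_r is set
# through a clock-referenced synthesizer and the signal comes from a digitized
# transimpedance amplifier (Ti:Sapphire comb) or a lock-in amplifier (fiber comb).
def set_repetition_rate(f_r_hz):
    raise NotImplementedError("replace with control of the f_r synthesizer")

def read_signal():
    raise NotImplementedError("replace with ADC or lock-in readout")

def scan_repetition_rate(f_start, f_stop, f_step, dwell_s):
    """Step f_r across a window and record the fluorescence at each point."""
    spectrum = []
    n_steps = int(round((f_stop - f_start) / f_step)) + 1
    for i in range(n_steps):
        f_r = f_start + i * f_step
        set_repetition_rate(f_r)
        time.sleep(dwell_s)              # allow the lock and detection to settle
        spectrum.append((f_r, read_signal()))
    return spectrum

# Step sizes and dwell times comparable to those quoted in the text:
# Ti:Sapphire comb: 1 Hz steps over ~1.5 kHz, ~1 s per point;
# fiber comb: 0.25 Hz steps, one point every 4 s.
```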
The amplitude agreement is reasonable, given the assumptions of the model. We attribute the residual disagreement between the calculated and experimental amplitudes to issues related to the power depletion of the light exciting the first stage of the transition. The calculation includes this effect but assumes that the counter-propagating beams have common focal points, which localizes the position at which the atoms are excited. If the focal positions of the two laser beams do not overlap perfectly, the location where the atoms are excited can vary depending on the relative powers of the two beams. This can combine with a position-dependent fluorescence detection efficiency and affect the relative amplitudes of resonances from different velocity classes. We note that an effective optical depth of 1.0 cm was used in the calculation, which agrees well with the geometrical size of the cell. The calculations were done with a transit line width of γ_T/2π = 4 MHz. This value is somewhat larger than the estimated line width of γ_T/2π ≈ 500 kHz based on a laser beam diameter of ≈ 80 µm. We attribute this additional broadening to imperfect overlap of the focal points of the counter-propagating beams. If the foci of the two beams do not overlap completely, then there will be atoms for which the wave vectors of the two beams are not antiparallel. These atoms will have additional broadening due to the imperfect cancellation of the Doppler shift. We explored the effect of wave vectors that were slightly misaligned by including a misalignment term in the calculation. The calculation reproduced broadening at the same level we observe in the experiment for values that are consistent with our limits on how well the foci of the two beams overlap. The calculations were done assuming linearly polarized light. For the data, the polarization of the light was not controlled. However, data were taken with linearly polarized light and no differences in the spectra were observed.

Trace (c) of Fig. 4 shows the calculated spectrum for the 5S_{1/2} → 5P_{1/2} → 5D_{3/2} transition of 85Rb. The spectrum consists of three distinct groups. The first group on the left is due to excitation of the F = 3 → F' transitions, while the second grouping corresponds to the F = 2 → F' transitions. The last group at the high end of the spectrum corresponds again to the excitation of the F = 3 → F' transitions, but with a new total mode number equal to one fewer than the mode number corresponding to the first group. The frequency spacing between the first and third groups corresponds to the repetition interval predicted by Eq. (7), Δf_r ≈ 1.1 kHz. The relative amplitudes of the groups, as well as the amplitudes of the peaks within each group, vary significantly due to the different velocity classes that are excited by the different modes of the frequency comb. Such features are also evident in the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} transitions (Fig. 5). In both spectra the hyperfine states of the ground and excited states are clearly resolved while the intermediate hyperfine structure is not. This is in contrast to the data in Ref. [6], where the transitions through different intermediate states gave distinct peaks. This difference is a result of the relatively close energies of the two resonant photons.

[Figure caption: … shows the calculated fluorescence spectra for the 5S_{1/2} → 5P_{1/2} → 5D_{3/2} and 5S_{1/2} → 5P_{3/2} → 5D_{3/2,5/2} transitions. Traces (c) and (d) show the calculation for the 5S_{1/2} → 5P_{1/2} → 5D_{3/2} transitions for 85Rb and 87Rb, respectively. Trace (e) shows the calculation for the 5S_{1/2} → 5P_{3/2} → 5D_{3/2,5/2} transitions for both isotopes. The amplitude of the transitions through the 5P_{3/2} state relative to those through the 5P_{1/2} state was adjusted to reflect the differing light intensity transmitted through the filter.]

As with the Ti:Sapphire-comb experiment, the relative amplitudes of the different resonance peaks differ slightly between the calculated and experimental spectra. The positions of the peaks agree well for the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} spectrum. However, for the 5S_{1/2} → 5P_{1/2} → 5D_{3/2} spectrum there is some discrepancy at the 0.5 Hz level. We attribute this to the increased overlap and complexity of the spectrum. Shifts in the apparent positions of the transition resonances will occur as a result of the discrepancies in relative amplitudes between the calculated and experimental spectra.

[Figure caption: … shows the calculated fluorescence spectra for the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} transitions. Traces (c) and (d) show the calculation for the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} transitions for 85Rb and 87Rb, respectively. Trace (e) shows the calculation for the 5S_{1/2} → 5P_{3/2} → 5D_{3/2} transition for both isotopes. This transition is nine times weaker due to the relative amplitudes of the reduced matrix elements and is also weighted by the branching ratio to the 6P states (38% for the 5D_{3/2} state and 35% for the 5D_{5/2} state, based on the calculations of Ref. [16]).]
Power depletion of the light corresponding to excitation of the first stage of the transition was not found to improve the agreement between the data and the calculation and was not included. This may be a result of a lower atomic density for the data taken with the fiber comb compared to that for the Ti:Sapphire-comb experiment. The calculations for the 5S_{1/2} → 5P_{1/2} → 5D_{3/2} transitions were done with a transit line width of γ_T/2π = 2.7 MHz. Those for the 5S_{1/2} → 5P_{1/2} → 5D_{3/2} transitions were done with a transit line width of γ_T/2π = 4 MHz. These values are in reasonable agreement with the estimate based on a laser beam diameter of 15 µm, although imperfect overlap of the foci of the counter-propagating laser beams may have contributed additional broadening. The calculations were done assuming linearly polarized light. For the data shown, the polarization of the light was not controlled. However, data were taken with linearly polarized light for the transitions excited through the 5P_{3/2} state, and these data did not display any differences from the data where the polarization was not controlled.

For a comb with a repetition rate of 250 MHz, f_r must be scanned by Δf_r ≈ 80 Hz in order for a given transition to recur in the spectrum. As can be seen from Eq. (7), this is smaller than the repetition interval for the Ti:Sapphire comb by a factor of the ratio of the repetition rates squared, while the widths of the resonance peaks decrease by only a single power of that ratio. As a result, there is more overlap of the peaks arising from different transitions. This is clearly observable in the comparison of the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} transitions for the different combs, Figs. 5 and 7. However, in the case of the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} transition the individual peaks are still resolvable in most cases. Trace (f) shows the calculated spectrum for a single hyperfine transition, in this case the F = 2 → F'' = 3 transition of 85Rb. As expected, there is only one resonant peak in the spectrum for this transition, a result of the near degeneracy of the two photons involved in the transition. As described above, the situation is quite different for the 5S_{1/2} → 5P_{1/2} → 5D_{3/2} transitions. Trace (e) of Fig. 6 shows the calculated spectrum for a single hyperfine transition of 85Rb (F = 3 → F'' = 2). There are multiple distinct velocity classes that contribute to the fluorescence signal, and the resonance for each velocity class occurs at a different repetition rate. The result is multiple distinct peaks in the fluorescence signal and a significantly more complicated fluorescence spectrum containing numerous overlapping peaks.

[Figure caption: … shows the calculated fluorescence spectra for the 5S_{1/2} → 5P_{3/2} → 5D_{3/2,5/2} transitions. Traces (c) and (d) show the calculation for the 5S_{1/2} → 5P_{3/2} → 5D_{5/2} transitions for 85Rb and 87Rb, respectively. Trace (e) shows the calculation for the 5S_{1/2} → 5P_{3/2} → 5D_{3/2} transition for both isotopes. This transition is nine times weaker due to the relative amplitudes of the reduced matrix elements and is also weighted by the branching ratio to the 6P states (38% for the 5D_{3/2} state and 35% for the 5D_{5/2} state, based on the calculations of Ref. [16]). Trace (f) shows the calculation for the F = 2 → F'' = 3 transition for 85Rb.]

V. CONCLUSIONS

We have presented a comprehensive investigation of velocity-selective two-photon direct frequency-comb spectroscopy in atomic Rb. We have experimentally and theoretically demonstrated the effect of the repetition rate of a frequency comb on the two-photon excitation rate.
The energy level structure of Rb also allowed us to explore the effect of the energy of the intermediate state on the two-photon excitation. The energy of the intermediate state is particularly important in the case where the repetition rate of the frequency comb is less than the Doppler width of the resonance corresponding to the first stage of the transition. This investigation demonstrates the benefits and challenges of direct-frequency-comb spectroscopy. While there is significant advantage in the wavelength versatility of the frequency comb, the details of the resulting spectra can display complicated features arising from the presence of the numerous comb frequencies. This effect is particularly pronounced in the case where the repetition rate of the comb is less than the Doppler width of the atomic velocity distribution. The presence of multiple resonances corresponding to excitation of a two-photon resonance by comb light with different mode numbers results in a dense spectrum. This effect will likely complicate any effort to make quantitative measurements of transition frequencies for atoms with multiple hyperfine transitions.

VI. ACKNOWLEDGEMENTS

The authors would like to acknowledge Scott Diddams for assistance with the Ti:Sapphire oscillator, Lee Sherry and William Striegl for early contributions to the Oberlin frequency comb experiment, and Bill Marton for help with the construction of the apparatus. The Oberlin frequency comb experiment benefited from funding from the National Institute of Standards and Technology Precision Measurements Grant. The California State University - East Bay frequency comb experiment was supported by the National Science Foundation under Awards PHY-0958749 and PHY-0969666.
Methicillin-resistant S. aureus colonization in intensive care unit patients: Early identification and molecular typing

Introduction: Early detection of methicillin-resistant Staphylococcus aureus (MRSA) in colonized patients is very important for infection control procedures to prevent MRSA spread. We aimed to monitor MRSA carriage in intensive care unit (ICU) patients and to evaluate the speed and efficiency of conventional culture, immunological, chromogenic, and molecular methods together with genotyping. Methodology: Nasal and axillar swab specimens were obtained from patients in the ICUs of the general surgery and neurosurgery wards in a tertiary hospital once a week over four weeks between December 2009 and July 2010. Oxacillin and cefoxitin disk diffusion tests, oxacillin agar screening test, latex agglutination test, chromogenic agar, and real-time polymerase chain reaction (PCR) tests were used for isolation and identification of MRSA. MRSA isolates were typed using ribotyping and pulsed-field gel electrophoresis (PFGE) typing. Results: MRSA colonization was detected in 48 of 306 patients by real-time PCR. The MRSA colonization rate was 6.2%, 15.5%, and 38.5% at admission and in the first and second weeks, respectively. Sensitivity, specificity, positive and negative predictive values for all phenotypic tests were 98%, 99.6%, 98%, and 99.6%, respectively. The shortest handling time was observed with PCR. A total of three banding patterns were obtained from MRSA isolates by ribotyping, and PFGE analyses revealed 17 different pulsotypes varying from 11 to 18 distinct bands, showing high genetic diversity among the samples. Conclusion: Phenotypic MRSA screening tests in our study exhibited similar performances. The superiority of real-time PCR is its short turnaround time.

Introduction

Since methicillin-resistant Staphylococcus aureus (MRSA) is often resistant to multiple classes of antibiotics, it is an important agent of nosocomial infections. Nosocomial infections caused by MRSA have been associated with increased mortality and high healthcare costs [1]. In hospitals, transmission occurs from a colonized or infected individual to others mainly via the hands of transiently colonized healthcare workers [2,3]. Therefore, early detection of MRSA-colonized patients or healthcare workers is very important for infection control procedures to prevent the spread of MRSA. Clinical microbiology laboratories must choose an appropriate method to rapidly detect MRSA colonization in patients. Conventional microbiological culture methods have a diagnostic delay of three to five days for growth of organisms, while commercial selective agar-based methods yield the results within 18-24 hours. However, the molecular methods can yield results in 2-3 hours [4]. The aims of this study were to monitor MRSA carriage in patients admitted to intensive care units (ICUs) and to evaluate the speed and efficiency of conventional microbiological culture, immunological, chromogenic, and molecular methods for identification together with genotyping of strains by ribotyping and pulsed-field gel electrophoresis (PFGE).
Study design and samples This study was conducted in ICUs of general surgery and neurosurgery in a tertiary hospital between December 2009 and July 2010.Following the approval of the local ethics committee, two nasal and two axillar swab specimens were obtained from patients in the first 48 hours of admission.Later samplings were carried out once a week during the patients' four weeks in the ICU.The patients with MRSA or methicillin-susceptible S. aureus (MSSA) colonization were treated with topical mupirocin twice daily for five days, and isolation precautions were added to standard infection control measures for these patients. S. aureus ATCC 29312 and S. aureus ATCC 33593 were included in each run for quality control.In addition, S. aureus NRRL B 767 was used in each step of molecular studies as a control strain. Isolation and identification of S. aureus One of the swab specimens was inoculated onto 5% sheep blood agar.After incubation at 35°C for 18-24 hours, S. aureus was identified on the basis of colony morphology, Gram stain, catalase test, and tube coagulase test.Other specimens were used in MRSA polymerase chain reaction (PCR). Disk diffusion test of oxacillin and cefoxitin Bacterial suspension of each isolate was adjusted to the turbidity of 0.5 McFarland standards, spread onto Mueller-Hinton agar (MHA), and then 1 μg oxacillin and 30 μg cefoxitin disks were placed onto plates [5].After incubation at 35°C for 18-24 hours, inhibition zone diameters around the disks were measured. Oxacillin agar screening test One milliliter of bacterial suspension adjusted to 0.5 McFarland was inoculated on MHA plates containing 4% NaCl and 6 mg/L oxacillin [5].The plates were incubated at 35°C for 24 hours.In the presence of any colony on MHA plates, isolates were considered as MRSA. Latex agglutination test The mecA product (PBP2a) was detected using a commercial latex agglutination kit (Slidex MRSA Detection, bioMerieux, Marcy l'Etoile, France).The extract of an MRSA-suspected colony prepared by heating and centrifugation was mixed with latex particles sensitized with monoclonal antibody directed against PBP2a.Control included a suspension of unsensitized latex particles. Chromogenic agar At the same time, swabs were plated directly onto the selective chromogenic agar (Chrom ID MRSA Agar, bioMerieux, Marcy I'Etoile, France).After 18-24 hours of incubation at 35°C, the presence of greenpigmented colonies was considered as positive and no growth or colonies with other colors were considered as negative for MRSA. MRSA PCR The BD GeneOhm MRSA real-time PCR system (BD Diagnostics, Sparks, USA) was used.Other nasal and axillar swabs were transferred to the sample reagent buffer tubes and processed for cell lysis, and then DNA extraction was performed according to the manufacturer's recommendations.Three microliters of the lysed specimen were added to the PCR tubes containing 25 µL of the master mix.PCR was performed with a SmartCycler instrument (Cepheid, Sunnyvale, USA).Positive and negative controls were included in each run. 
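As a rough illustration of how the disk diffusion readings described above are turned into a susceptibility call, the sketch below classifies an isolate from its cefoxitin and oxacillin zone diameters. The numeric breakpoints are placeholders of the kind published by CLSI for S. aureus and are not quoted from this study; in practice the current CLSI M100 tables must be consulted.

# Illustrative interpretation of disk diffusion zone diameters for S. aureus.
# The breakpoints below are placeholders (approximate CLSI-style values for the 30 ug cefoxitin
# and 1 ug oxacillin disks); consult the current CLSI M100 document before using such thresholds.
CEFOXITIN_SUSCEPTIBLE_MM = 22   # >= 22 mm: mecA-mediated resistance not detected (assumed value)
OXACILLIN_SUSCEPTIBLE_MM = 13   # >= 13 mm: susceptible (assumed value)
OXACILLIN_RESISTANT_MM = 10     # <= 10 mm: resistant (assumed value)

def classify_mrsa(cefoxitin_zone_mm, oxacillin_zone_mm):
    """Return a tentative MRSA/MSSA call from the two zone diameters."""
    cefoxitin_resistant = cefoxitin_zone_mm < CEFOXITIN_SUSCEPTIBLE_MM
    if oxacillin_zone_mm <= OXACILLIN_RESISTANT_MM or cefoxitin_resistant:
        return "MRSA (oxacillin resistance suspected)"
    if oxacillin_zone_mm >= OXACILLIN_SUSCEPTIBLE_MM and not cefoxitin_resistant:
        return "MSSA (oxacillin susceptible)"
    return "indeterminate - confirm with mecA/PBP2a testing"

print(classify_mrsa(cefoxitin_zone_mm=15, oxacillin_zone_mm=8))   # -> MRSA call
print(classify_mrsa(cefoxitin_zone_mm=27, oxacillin_zone_mm=20))  # -> MSSA call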
Ribotyping

The automated ribotyping was performed using a robotized instrument (Riboprinter Microbial Characterization System, Qualicon, Du Pont, Wilmington, USA) and the Riboprinter System Data Analysis Program. The procedure used for processing each sample is described in detail by the manufacturer. Briefly, the isolates were grown overnight at 35°C, suspended in buffer, heated at 80°C for 10 minutes, and lysed. The total DNA was restricted with EcoRI, electrophoretically separated, and transferred to a membrane, followed by hybridization. Ribotypes were recorded and numbered by the system.

Pulsed-field gel electrophoresis (PFGE) typing

Approximately one-fourth of a plug was used for DNA digestion. Plugs were pre-incubated in restriction buffer for 30 minutes at room temperature, and then digested with 20 units of SmaI enzyme (New England Biolabs, Ipswich, USA). DNA restriction fragments were separated in 1% agarose in 0.5X TBE using a Chef Mapper (Bio-Rad Laboratories, Hercules, USA), and pulse times were ramped from 5 seconds to 40 seconds over 19 hours. Gels were stained with gel red, visualized using an ultraviolet transilluminator, and photographed. Strains of S. aureus were placed in groups of identical or related strains by comparing the banding patterns produced, using a combination of visual inspection of the photographs and computer analysis (SPSS version 11.0, SPSS Inc., Chicago, USA) to create a similarity dendrogram. A pulsotype (PT) was defined as a unique electrophoretic banding pattern. Strains with identical restriction profiles were assigned to the same type. The cluster cutoff was set at 78% similarity and all clusters were identified by Arabic numerals.

Results

A total of 306 patients (213 in the ICU of general surgery and 93 in the ICU of neurosurgery) were included in this study. The total numbers of S. aureus- and MRSA-colonized patients were 97 and 48, respectively. S. aureus colonization was detected in 49 patients at admission to the ICUs, and the MRSA rate at admission was 6.2% overall (n = 19): 5.2% (n = 11) and 8.6% (n = 8) in the general surgery and neurosurgery ICU patients, respectively (Figure 1). During the four-week follow-up of the remaining non-colonized patients, there were 97 patients at the end of the first week, 26 patients at the end of the second week, 6 patients at the end of the third week, and 2 patients at the end of the fourth week in the ICUs. The numbers of S. aureus- and MRSA-colonized patients were 29 and 15 in the first week, and 16 and 10 in the second week, respectively. Although half of the hospitalized patients were colonized with MRSA in the third and fourth weeks, the number of patients was too low to justify a claim of a high prevalence of MRSA colonization. However, no new MRSA infections were detected in the general surgery and neurosurgery ICUs during the study. When PCR was considered as the gold standard test for MRSA detection, all phenotypic tests exhibited similar performance results; discrepancies were observed for only three isolates (Table 1). Sensitivity, specificity, and positive and negative predictive values were found to be 98%, 99.6%, 98%, and 99.6%, respectively. The agreement between each phenotypic test and PCR was determined to be very high (about 98%) for both MRSA and MSSA (Table 2). However, the shortest handling time in the laboratory was observed with PCR.
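The sensitivity, specificity, and predictive values reported above follow directly from a 2×2 table of each phenotypic test against PCR. The sketch below is illustrative only: the counts are assumed values chosen to reproduce the reported figures (47 true positives, 1 false negative, 1 false positive, 257 true negatives among 306 patients), not the exact per-test tables from the study.

# Illustrative calculation of diagnostic performance against PCR as the gold standard.
# The counts below are assumptions chosen to reproduce the reported values; the actual
# per-test 2x2 tables (Table 1) may differ slightly between the phenotypic methods.
tp, fn, fp, tn = 47, 1, 1, 257

sensitivity = tp / (tp + fn)   # 47/48  ~ 97.9%, reported as 98%
specificity = tn / (tn + fp)   # 257/258 ~ 99.6%
ppv = tp / (tp + fp)           # 47/48  ~ 97.9%, reported as 98%
npv = tn / (tn + fn)           # 257/258 ~ 99.6%

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, PPV={ppv:.1%}, NPV={npv:.1%}")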
Automated riboprinting was applied to the 48 strains of S. aureus to assess the genetic similarity of the strains in the ICUs of general surgery and neurosurgery. Restriction of the total DNA with EcoRI yielded about 10-12 fragments of 2-13 kb in size (Figure 2). A total of three banding patterns (ribogroups) were obtained among the isolates. Forty-seven of the 48 MRSA isolates were confirmed as S. aureus, but one isolate was identified as S. haemolyticus by the ribotyping system. The latter isolate was excluded from the typing studies. The differences in the ribopatterns were mostly located in bands between 3 and 11 kb in size (Figure 2). The number of ribotypes determined and the number of isolates representing them are shown in Figure 2. Ribogroup 3 contained only one isolate, obtained from a male in the general surgery unit. However, ribogroups 1 and 2 contained isolates from both the general surgery and neurosurgery units, indicating no relation between the sources of the isolates and the ribotype patterns. A total of 47 S. aureus strains were typed using PFGE. All of the strains tested were typeable. The genetic analyses revealed 17 different PTs varying from 11 to 18 distinct bands in the range from 679 kb to 48.5 kb, showing high genetic diversity among the strains (Figure 3). A dendrogram that included all patterns was constructed on the basis of the similarity levels (Figure 4). A cut-off point of 78% similarity was used to define two main clusters. The dominant cluster (cluster 1) included PTs 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, and 17, representing strains isolated from both the general surgery and neurosurgery units. Cluster 2 included only PT 12, representing a strain obtained from the general surgery unit.

Discussion

In our study, 6.2% of patients had MRSA colonization at ICU admission, while 15.5% of the remaining patients in the first week and 34.6% in the second week acquired MRSA in the ICUs; since the number of isolates in the third and fourth weeks was very small, they were not included in this discussion. Previously reported prevalences of MRSA colonization at ICU admission ranged between 6.7% and 11.0% [8][9][10][11]. However, in countries where MRSA is endemic, such as India, the MRSA colonization rate is higher [12,13]. Marshall et al. [11] and Mathanraj et al. [13] reported that the strongest risk factor for acquisition of MRSA was length of stay in the ICU. These studies confirmed our results that the prevalence of unknown MRSA colonization at admission to the ICU is high in settings with endemic MRSA transmission. Prolonged hospitalization times in ICUs have been associated with increased colonization rates [8,11]. In highly endemic countries, routine surveillance for MRSA carriage in ICUs, with subsequent isolation of documented carriers, has been associated with reductions in MRSA infections in ICUs and across the hospital [14][15][16][17]. Therefore, rapid and accurate detection and eradication of colonization will be extremely beneficial in preventing the nosocomial spread of MRSA. Rapid diagnostic tests may allow early identification of previously unknown MRSA carriers at ICU admission. There are several phenotypic tests for detection of MRSA from colonies isolated on routine media preferred by the Clinical and Laboratory Standards Institute (CLSI) [5]. The cefoxitin disk screen test for detection of oxacillin resistance in staphylococci is preferred by the CLSI. For S. aureus and S.
lugdunensis, the cefoxitin disk test is comparable to the oxacillin disk test for prediction of mecA-mediated resistance to oxacillin; however, the cefoxitin disk test is easier to read, and therefore, it is preferred method of the CLSI [18].The oxacillin agar screen method has also been recommended for confirmation of suspected strains by the CLSI.The studies evaluating these tests have showed that they have acceptable performance; in general, sensitivity (94.1%-100%) and specificity (87.4%-100%) ranges of the oxacillin disk diffusion method, cefoxitin disk diffusion method, and oxacillin agar screen test in these studies were found to be similar to those found in our study [19][20][21].In Baddour et al.'s study [22], the oxacillin agar screen and PBP2a latex agglutination methods were reported to be more sensitive than the oxacillin and cefoxitin disk diffusion methods, and cefoxitin disk diffusion was found to be the most specific method.Also, Valesco et al. [21] reported that cefoxitin disk diffusion and PBP2a detection were the most sensitive methods and that the cefoxitin disk was the best predictor of methicillin resistance in S. aureus strains among oxacillin, cefazolin, cefoxitin, cefotaxime, and imipenem disk tests, oxacillin Etest, oxacillin agar screening, and latex agglutination.However, these phenotypic methods require an additional 18-24 hours on standardized culture methods for results. Commercial chromogenic media and latex agglutination tests detecting PBP2a are alternative and cost-effective approaches to screening clinical specimens for MRSA carriage.Other advantages of these tests are shorter time for detection of MRSA, enhanced recovery, minimal labor, and no additional antimicrobial susceptibility or screening tests [4,23].In a study evaluating three different commercial chromogenic media, sensitivity and specificity values were found to be 83.8%-89% and 92.1%-98.6%,respectively [23].A commercial latex agglutination test was reported to show good correlation with PCR as a gold standard test and was an alternative method that could be used in most laboratories [24].However, Denys et al. [23] observed that experience is needed for the recognition of suspected colonies on chromogenic media, and they recommend follow-up confirmation of questionable colonies by a coagulase test or latex agglutination test and Gram stain to increase the specificity of MRSA interpretation [23].The use of a latex agglutination test along with a chromogenic medium has been shown to rule out false-positive results and increase specificity up to 99% [25]. Although it is very expensive and not practical for most routine clinical laboratories, a molecular test based on the detection of the mecA gene is considered as the gold standard test for methicillin resistance.[19][20][21]24].Compared with the chromogenic agar MRSA assay, PCR had sensitivity, specificity, and positive and negative predictive values of 100%, 98.6%, 95.8%, and 100%, respectively, and the mean PCR turnaround time was 14.5 hours [4].In a systematic review, Polisena et al. 
[3] found small differences in the MRSA colonization, infection, and transmission rates between screening using PCR and chromogenic agar, but the turnaround time and number of isolation days were lower for screening by PCR versus chromogenic agar.Thus, not only high performance but also short turnaround time has important advantages.Although culture-based MRSA screening tests have proven to be cheaper and more sensitive methods, the long time required to report the results remains a major problem; isolation and identification results are usually available at least 24 to 72 hours after sample collection.This time delay could allow MRSA cross-transmission.Therefore, a molecular MRSA detection test permits early identification of MRSA carriage in critically ill patients.It could help to improve MRSA control strategies, especially if it is linked to systematic onadmission screening and preemptive isolation of newly admitted patients [10].During this study, new cases of MRSA infection did not occur, probably due to early detection and eradication of the MRSA-colonized patients in the general surgery and neurosurgery ICUs.Wassenberg et al. [26] compared two different real-time PCR assays with conventional culture and showed that the number of isolation days was reduced by 44.3% with PCR-based screening at the additional costs of 327.84€ and 252.14€ per patient screened, and costs per isolation day avoided were 136.04€ and 121.76€. Controlling the spread of MRSA by screening patients, personnel, and the environment remains a high priority in infection control programs.Tracing the source and transmission routes of MRSA relies on typing methods as tools for the genetic characterization of isolates.PFGE has been accepted as the reference method for molecular strain typing of MRSA.PFGE is known to be highly discriminatory, and therefore it is frequently used for outbreak analysis [27].However, this strategy is labor intensive, time consuming, and technical instability has an adverse effect on reproducibility.Therefore, automated ribotyping may be used for genetic characterization of a high number of clinical isolates.The discriminatory power of ribotyping as an automatable technique for differentiation of bacteria for systematic, epidemiological, ecological, and population studies has been well reviewed previously [28].In this study, automated riboprinting was applied to the 47 strains of S. aureus to assess the genetic similarity of the strains isolated from different patients of the ICUs of general surgery and neurosurgery.All the strains tested were found to be typeable, and ribogroup 1 was dominant among the strains tested. It is known that PFGE-nontypeable isolates are found in samples from humans; however, in this study, all of the strains tested were typeable.The genetic analyses of 47 isolates revealed 17 different PTs, indicating high genetic diversity among the samples.A dendogram that included all patterns was constructed on the basis of the similarity levels defined two main clusters without any epidemiological indication among the strains. 
Conclusions

The MRSA colonization rate at admission to the ICUs is high, and prolonged hospitalization times in the ICUs increased the colonization rate. Therefore, early, rapid, and accurate detection and eradication of colonized patients at admission to the ICU may help to prevent the nosocomial spread of MRSA. Although the commercial and conventional MRSA screening tests evaluated in our study exhibited similar performance results, the superiority of real-time PCR is that it has a short turnaround time compared with the approximately 48-72 hours required by agar-based tests. In this study, ribotyping was shown to be a fast and reliable method for identification, but the discriminatory power of PFGE for molecular strain typing of S. aureus remains the highest compared with ribotyping.

Figure 1. The distribution of and colonization rate with S. aureus isolates. The percentages in the third and fourth weeks are not included in this figure because the number of hospitalized patients was very low in this period.

Figure 2. Ribotypes of S. aureus strains isolated from the ICUs of general surgery and neurosurgery (ribogroup numbers were assigned by the automated ribotyping system).

Figure 3. Restriction endonuclease digestion of total genomic DNA of strains representing some of the groups (pulsed-field gel electrophoresis [PFGE] profiles) of S. aureus: restriction digestion with SmaI and separation by PFGE. Lanes show some of the pulsotypes. M, 48.5-1,000-kb concatemer ladder. The PFGE conditions were a 1% (w/v) agarose gel in 0.5X TBE, with switching pulses of 5 to 40 seconds over a period of 19 hours at 6 V/cm.

Table 1. Performance of phenotypic MRSA detection methods compared with PCR.

Table 2. MRSA screening test results. MRSA: methicillin-resistant S. aureus; MSSA: methicillin-susceptible S. aureus; PCR: polymerase chain reaction; *Elapsed time for identification after the sample reached the laboratory.
BREAST SELF-EXAMINATION AS A METHOD FOR EARLY DETECTION OF BREAST CANCER BASED ON LITERATURE REVIEW

Address for correspondence: Robert Walaszek, The University of Physical Education in Krakow, 31-571 Kraków, al. Jana Pawła II 78, phone: +48 12 683 12 27, e-mail: robertwalaszek63@gmail.com

Introduction

Breast cancer in the nineties was the second most common cancer among Polish women. It is now the most common cancer, responsible for approx. 17.4% of all morbidity and approx. 22.2% of all deaths. Every year in Poland nearly 11,000 new cases of breast cancer are recorded, and this number is constantly growing. In order to improve the effectiveness of treatment, screening tests that allow the disease to be diagnosed early in its development were introduced in oncology. In Poland, since 2006 women aged 50-69 have been covered by screening mammography, performed at two-year intervals, which is in line with the recommendations of the EU committee of experts (Didkowska 2011). Prerequisites for effective screening are its mass scale, long-term continuity, and the high quality of the tests (Humphrey et al. 2002). In countries that have introduced prevention programs, a decrease in mortality of approx. 15% is visible (Nelson et al. 2009). In Poland, attendance among the women invited to screening is low; in 2011 it amounted to 43.5% (Jokiel 2009). It is therefore necessary to spread knowledge about the prevention of breast cancer among women, to train them in methods targeted at eliminating or reducing risk factors for breast cancer, and to promote healthy behaviors, including breast self-examination by women of all ages (Kaczmarek-Borowska et al. 2013, Tood, Stuifbergen 2012). This study is a review written on the basis of a survey of Polish and foreign literature. Its aim is to present the methodology of breast self-examination, which is intended to detect cancerous lesions at an early stage of their development, and to provide an overview of the results of Polish research on knowledge of breast self-examination techniques, the awareness of the women surveyed regarding the age at which breast self-examination should be started, the frequency with which breast examination is performed, and knowledge of risk factors for breast cancer and of its symptoms.

Risk factors for breast cancer

Based on numerous studies and long-term observations, a number of factors have been identified that contribute to an increased risk of breast cancer (Fitzgibbons et al. 2000). Among the major risk factors predisposing to the development of breast cancer are female gender and age (Bouchardy et al. 2007). Family history and genetic predisposition are also important in determining the risk (Rouzier et al. 2004).
An important role in the pathogenesis of malignant tumor growth in the breast is played by estrogens. Epidemiological studies confirm that increased exposure to endogenous and exogenous estrogens increases the risk of breast cancer (Narod 2001). A larger lifetime number of menstrual cycles multiplies the risk compared with women whose menarche appeared at a later age and who experienced early menopause (Leung et al. 2008). The age at first childbirth, especially a first pregnancy after the age of 30, also has an impact on breast cancer risk (Xue et al. 2007). Increasing attention is now being paid to the relationship between cancer risk and carbohydrate consumption. It has been shown that there is a link between the consumption of products with a high glycemic index (GI), elevated insulin levels and insulin resistance, and an increased risk of breast cancer (Jonas 2003). In studies involving 2,569 women with breast cancer, a direct relationship between the consumption of carbohydrates with a high GI and the risk of cancer was reported (Tavani et al. 2006). It has also been found that there is a link between obesity and the development of breast cancer, particularly in postmenopausal women (Fair 2007). Several years of observation of a group of 1,500 patients diagnosed with breast cancer showed a significantly increased risk of death for BMI ≥ 30 kg/m2 compared with those with a BMI < 25 kg/m2 (Dal Maso 2008). A pooled analysis of 8 cohort studies involving 340 thousand women showed a 30% increase in the risk of breast cancer for BMI ≥ 28 kg/m2 compared with BMI < 21 kg/m2 (Zatoński 2012). Researchers (Lorincz, Sukumar 2006) explain this by the increased amount of estrogen secreted from fat cells, which predisposes to breast cancer. This is consistent with the higher levels of sex hormones circulating in the blood of obese women compared with women of normal body weight, both before and after menopause. Epidemiological studies confirm that alcohol abuse is a risk factor for breast cancer. It has been shown that alcohol consumption in postmenopausal women correlates directly with the development of breast cancer, which has not been demonstrated in premenopausal women (Singletary, Gapstur 2001). The results of these studies indicate a relationship between the hormonal state of the body, alcohol consumption, and a predisposition to breast cancer (Li Chi et al. 2006). The carcinogenic action of ethanol involves the effect of this compound on endogenous steroid hormones and metabolites that lead to the generation of free radicals, which directly damage DNA (Singletary, Gapstur 2001). The glycemic index is defined as the average percentage increase in blood glucose after a serving containing 50 grams of digestible carbohydrates is eaten by a statistically representative group of people; the increase in blood glucose after eating 50 grams of glucose serves as the base of the scale (100%). The glycemic index is calculated from the formula: GI = (increase in blood glucose after consumption of a product containing 50 g of carbohydrate) / (increase in blood glucose after ingestion of 50 g of glucose) × 100%.

Anatomy of the breast

Women's breasts are made up of skin, subcutaneous tissue, blood vessels, lymph vessels, and nerves. They are located on the chest wall at the level of the third to the sixth or seventh rib (Fig. 1). At the back they are adjacent to the fascia of the pectoralis major and pectoralis minor muscles, and laterally to the fascia of the serratus anterior muscle.
Internally, the breast is composed of the mammary gland, which consists of 15-20 tapering lobes of glandular tissue arranged radially around the nipple. The nipple is surrounded by a circular areola, characterized by strong pigmentation, onto which the modified sebaceous glands, the so-called Montgomery glands, open (Fig. 2). A proportion of the female population has an additional piece of mammary gland, called the tail of Spence, within the armpit. Within the lobes one can distinguish smaller structures called lobules. They consist of groups of milk-secreting glands surrounded by connective tissue (Tortora, Derrickson 2008). [Labels of Fig. 2, translated from Polish: chest wall; pectoral muscles; lobe of the mammary gland; nipple; areola; lactiferous duct; fat body of the breast; skin.] Each lobe contains a lactiferous duct, which widens into a lactiferous sinus before opening on the nipple. These sinuses, with a cross-section of 5 to 8 mm and an average length of 12 mm, end in a narrowing at the base of the nipple. The areas between the lobes of glandular tissue are filled with fat, which also creates a protective layer around the mammary gland (Tortora, Derrickson 2008).

Methodology of breast self-examination

According to the recommendation of the Polish Gynaecological Society on prevention and early diagnosis of breast gland changes, women over 20 years of age are advised to perform breast self-examination regularly, once a month. Menstruating women should perform the examination on the second or third day after menstruation, whereas pregnant women and women after the menopause should always do it on the same day of the month (Spaczyński 2005). Breast self-examination consists of a visual examination and a palpatory examination. The visual examination is done in a standing position in front of a mirror. A woman inspecting her breasts should: hold her upper limbs along the torso (Fig. 4a), raise her upper limbs (Fig. 4b), clasp her hands behind the head (Fig. 4c), and rest her upper limbs on the hips (Fig. 4d). During the examination in the positions listed above, the outline and symmetry of the mammary glands are assessed, together with skin changes such as redness, ulceration, dilated subcutaneous veins, peeling or linear skin rupture, the presence of thickenings, lumps or nodular masses, retraction of a nipple, pathological discharge from the nipple, and displacement of the nipple in relation to the nipple line (Lewandowski 2007). The palpatory examination is done in a sitting or lying position. The examination in a lying position is essential to check the quadrants of the breast gland. In 50% of cases a breast tumour is found in the upper outer quadrant. Almost 20% of tumours are located within the nipple or the areola, 15% in the upper inner quadrant, and 11% in the lower outer quadrant. The fewest, only 6% of tumours, are located in the lower inner quadrant (Fig. 5). The examination is done with circular movements, pressing the breast with the flat-arranged fingertips of digits II to IV (Lewandowski 2007). During the examination in a sitting position the following movements are performed: a transverse stroking of the breast gland (Fig. 6a) and a spiral stroking of the breast gland from the nipple to the breast circumference, the right breast being examined in a clockwise direction and the left one in a counter-clockwise direction (Fig. 6b),
a radial stroking of the breast gland from the nipple to the breast circumference (Fig. 6d), a radial rubbing of the breast gland from the nipple to the circumference (Fig. 6e), and pressing of the breast gland in order to check for nipple discharge (Fig. 6f). The examination is finished by checking the state of the lymph nodes (Lewandowski 2007).

Review of research results on breast self-examination

The assessment of women's knowledge of breast self-examination has been the subject of numerous studies, which are listed in the references (Tab. 1). Numerous observations and studies show that the participation of health service workers in education on breast cancer prevention is insignificant (Tab. 2). Consequently, the scheme of breast self-examination should be widely promoted among women, mainly by GPs. Indeed, the active participation of every woman in breast cancer prevention depends mainly on information from doctors (Karczmarek-Borowska et al. 2013).

Table 1. Review of studies on the assessment of women's knowledge of breast self-examination.

Summary

The review of conscious breast cancer prevention made by the authors of this paper shows that the vast majority of women in Poland know about breast self-examination techniques (over 80%), but considerably fewer women know when this examination should be started and how often it should be performed (around 47% and 40%, respectively). A large group of women know about factors that increase the risk of breast cancer. Childlessness (80%), genetic factors (60%), and an unhealthy lifestyle, stress, a poor diet, stimulant use and overweight (from 37 to 53%) were indicated as predisposing to breast cancer. Among the less frequently mentioned factors were hormonal disorders (42%), too little physical exercise (24%), and early menstruation and a first pregnancy at a late age (17% and 9%, respectively). Polish women declare in surveys that they know quite a lot about breast cancer symptoms. First of all they mention the presence of hard, painful lumps within the breast (85%) and discharge from the nipple (64%). Somewhat less often they point to skin changes around the nipple (60%) and changes in breast shape and symmetry (60%). Least often the respondents point to enlargement of the lymph nodes in the armpit (35%). On the basis of epidemiological studies, the American National Cancer Institute has assessed that the risk of breast cancer increases with age (Horner et al. 2009). Cases of breast cancer occurring below the age of 35 are estimated at only 1-3% of all cases. A considerable increase in the number of breast cancer cases takes place after the age of 50. The percentage of women diagnosed with this cancer at the age of 50-59 is as high as 32% of all cases (Mamrocka-Mączka 2013). Healthy behaviours are related to the level of knowledge that society has about diseases and the methods of their prevention (Dobrzyń et al. 2003). From the analysis of the available papers and research it follows that women's level of knowledge about breast self-examination requires continued, wide-scale information campaigns in order to bring about changes in attitudes towards one's own health. The early detection of pathological changes in the breast gland by women themselves is a cheap and simple method of early diagnosis of cancerous changes; in this way it reduces the threat posed by this cancer and increases the chances of recovering from it.
It has been observed that the average diameter of a tumour detected by women who examine their breasts on a regular basis is 12 mm, whereas a tumour detected accidentally in women who never performed a self-examination has an average diameter of 40 mm (Karczmarek-Borowska et al. 2013). Studies by Foster and Constanza (Fostera, Constanza 1984) showed that the 5-year survival of breast cancer patients was 18% higher among women performing self-examination than among women not performing this examination. In the light of the research, the level of knowledge does not depend on age or education. This is why knowledge about the methodology of breast self-examination, which is a very important element of cancer prevention, should be disseminated.

Conclusion

Knowledge of breast cancer prevention should be much more widely promoted, more training on the risk factors ought to be conducted, and women's healthy behaviours in the area of early breast cancer diagnosis should be promoted, paying special attention to breast self-examination.
Diagnostic Accuracy of Anticarbamylated Protein Antibodies in Established Rheumatoid Arthritis: A Monocentric Cross‐Sectional Study Objective To evaluate the diagnostic accuracy of anticarbamylated protein antibodies (CarP), alone and in combination with traditional biomarkers (rheumatoid factor [RF] and anticitrullinated peptide antibodies [ACPA]), in established rheumatoid arthritis (RA). Methods A commercially available enzyme‐linked immunosorbent assay (ELISA) kit was used to assess CarP concentrations in serum samples of 200 established RA and 206 controls (115 healthy donors and 55 patients with other rheumatic diseases). Main outcome measures were sensitivity, specificity, and area under the curve (AUC; 95% confidence interval [CI]). Difference in accuracy was evaluated by comparison of the respective AUCs. Results A serum CarP cut‐off of 1.47 ng/ml or more differentiated patients with RA from controls with 30% sensitivity, 97.1% specificity, and good accuracy (AUC[95%CI] = 0.83[0.79‐0.86], P < 0.0001). However, it showed moderate diagnostic accuracy in seronegative RA patients: sensitivity 17.9%, specificity 96.9%, and AUC (95% CI) = 0.69 (0.63‐0.75). The diagnostic accuracy of CarP_ACPA and CarP_RF combinations was significantly superior to that of ACPA and RF alone (P < 0.0001 and P = 0.015, respectively), but not to that of ACPA_RF combination (P = 0.089) In addition, the CarP_ACPA_RF combination did not improve the diagnostic accuracy of the ACPA_RF combination (AUC mean difference [95% CI] = 0.006 [−0.001 to 0.015], P = 0.10). The number of positive autoantibodies (0, 1, 2, or 3) was not significantly associated with moderate‐severe disease (Disease Activity Score‐28 [DAS‐28] > 3.2) in adjusted multiple regression analysis. Conclusion CarP has good diagnostic accuracy in established RA but not in seronegative RA. The addition of CarP to ACPA and RF alone or in combination does not significantly enhance the diagnostic accuracy of ACPA_RF combination. INTRODUCTION Rheumatoid arthritis (RA) is a chronic inflammatory autoimmune disease that affects synovial joints and leads to bone damage, disability, and excess of mortality (1,2). Although the pathogenesis of RA is largely unknown, chronic inflammation is thought to be the result of immune-mediated mechanisms in subjects harbouring a genetically favourable substrate (1). Despite continuing efforts to identify new diagnostic biomarkers, early diagnosis of RA remains a challenging and highly individualized process. The 2010 American College of Rheumatology (ACR)/European League Against Rheumatism (EULAR) classification criteria for RA included autoantibodies (rheumatoid factor [RF] and anti-cyclic citrullinated peptide antibodies [ACPA]) as biomarkers of the disease (3). However, a sizeable subgroup of RA patients is negative for both ACPA and RF (the so-called seronegative RA) (4). Therefore, there is an urgent need to develop simple and affordable biomarkers for the accurate diagnosis of RA, especially in the early phase of disease and in seronegative patients. Among candidate markers of RA, antibodies against carbamylated proteins (CarP) have been extensively studied in recent years. CarP are described in the preclinical (5) and early phases of RA (6) and are associated with severe disease (7), bone erosions (8), and all-cause mortality (9). Of note, CarP were shown to be positive in seronegative RA patients (10). 
A good accuracy of CarP has been demonstrated in different cohorts of RA patients (10)(11)(12), but its usefulness in the diagnosis of RA in routine clinical practice is uncertain (13). In particular, there is a paucity of data about the additive value of testing CarP over and above ACPA and RF to classify RA patients as well as the diagnostic accuracy of CarP in patients lacking these traditional antibodies. Regueiro et al, reported only a limited value of testing CarP in addition to traditional biomarkers for the classification of early arthritis (14). Accordingly, in a recent meta-analysis, the combination of CarP, ACPA, and RF with respect to ACPA and RF alone showed a significant, although modest, increase in specificity (at the cost of a loss of sensitivity) in the prediction of RA in individuals at risk, but no significant improvement in the classification of patients with established RA (15). Based on this background, we sought to further explore the contribution of CarP testing, alone and in addition to ACPA and RF, for the classification of RA in a large monocentric cohort of patients with established RA compared with healthy controls and patients with other rheumatic diseases (RDs). PATIENTS AND METHODS Patients and controls. Established RA patients satisfying the 2010 ACR/EULAR classification criteria (3) consecutively enrolled in the BIOmarkers of Subclinical Atherosclerosis in RA-The Bio-RA study between October 2015 and November 2018 were included. We also enrolled an age-and gender-matched control population that included healthy donors (HDs), referred to the blood donors bank of the Azienda Ospedaliero-Universitaria of Sassari (Italy), and consecutive patients with RDs referred to the rheumatology outpatient's clinic of the Azienda Ospedaliero-Universitaria of Sassari (Italy). In RA patients, the following disease-specific scores, disease descriptors, and treatment data collected on the day of the inclusion in the Bio-RA study were available for analysis: C-reactive protein (CRP) concentrations, erythrocyte sedimentation rate (ESR) values, Disease Activity Score-28 (DAS-28), Health Assessment Questionnaire (HAQ) score, current steroid use, daily steroid dose in prednisone equivalent mg/day, current treatment with synthetic disease-modifying antirheumatic drugs (DMARDs), and current use of tumor necrosis factor-α-inhibitors or other biological DMARDs. The Bio-RA study was approved by the Ethics Committee of the Azienda ASL 1 of Sassari (Italy) (2219/CE-2015) and was conducted in accordance with the Declaration of Helsinki. Informed consent was obtained from each study participant. CarP test, ACPA, and RF. CarP were detected using a quantitative, commercially available enzyme-linked immunosorbent assay (ELISA) kit (Novatein Biosciences) according to manufacturer's instructions. ACPA were detected using a second-generation ELISA (anti-cyclic citrullinated peptide) kit (Delta Biologicals) while immunoglobin M RF was determined as part of routine analysis by immunonephelometry (Behering) according to the manufacturer's instructions. The cut-off for each antibody was set as the mean + 2 Standard Deviations (SD) in the control group. Statistical analysis. Results are expressed as mean values (mean ± SD) or absolute number and percentages (n [%]). Statistical differences between groups were assessed using unpaired Student's t-tests or the Mann-Whitney rank sum test, as appropriate. 
Differences between categorical variables were evaluated by the chi-squared test or Fisher's exact test, as appropriate. Correlations between variables were assessed by Pearson's correlation or Spearman's correlation, as appropriate. The ability of the different tests to discriminate between RA and controls, as well as between RA, RDs, and HDs, was assessed using receiver operating characteristic (ROC) curve analysis. Selection of the optimal cut-off values for sensitivity and specificity of the combination of different tests was made according to the Youden Index. Positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (+LR), and negative likelihood ratio (−LR) were also calculated. AUCs of different tests, alone and in combination, were compared with the nonparametric method of DeLong et al (16). Multiple regression analysis (ENTER method) was also performed to evaluate the association between the number of positive antibodies and the severity of disease. A P ≤ 0.05 was considered statistically significant.

RESULTS

Patients and controls. A total of 200 patients with established RA and 206 controls (151 HD and 55 patients with RDs) were studied. The subgroup of RDs included 14 patients with systemic sclerosis, 14 with systemic lupus erythematosus, 12 with Sjogren's syndrome, 4 with ankylosing spondylitis, 6 with psoriatic arthritis, and 5 with osteoarthritis. As expected, according to the epidemiology of RA, female gender was prevalent. Age and gender distributions were similar between the patient and control groups by matching as per protocol (Table 1). RA patients had a relatively long disease duration (mean 9.48 years), moderate mean disease activity (DAS-28 = 3.87 ± 1.1), and were mostly under immunosuppressive and anti-inflammatory treatment at the time of assessment (Table 3).

Accuracy of CarP for the diagnosis of established RA. Serum cut-offs for CarP, ACPA, and RF were 1.47 ng/ml or greater, 4.57 UI/ml or greater, and 73.7 UI/ml or greater, respectively. CarP serum concentrations were significantly higher in RA patients than in the whole group of controls (2.75 ± 4.63 vs 0.32 ± 0.57 ng/ml, P < 0.0001) (Table 1). CarP serum concentrations were also significantly higher in RA when compared with each control subgroup taken singly (CarP in RA = 2.75 ± 4.63 vs CarP in HD 0.49 ± 1.01 ng/ml and vs CarP in RDs 0.26 ± 0.26 ng/ml, P < 0.0001 for all comparisons) (Table 1). CarP was positive in 60 (30%) subjects from the RA group vs only 6 (2.9%) of the controls (P < 0.0001) (Figure 1A and Table 1), giving a sensitivity and a specificity of CarP for established RA of 30% and 97.1%, respectively (Table 2). The CarP test was positive in only three healthy subjects and in three patients from the RDs group, all affected by systemic sclerosis (SSc) (Figure 1B). The accuracy of CarP for the diagnosis of established RA was good, with AUC (95% CI) = 0.830 (0.790-0.866), P < 0.0001 (Figure 1D and Table 2). The PPV, NPV, +LR, and −LR of CarP testing for the diagnosis of established RA were 90.9%, 58.8%, 10.3, and 0.72, respectively (Table 2).
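The diagnostic indices quoted in this paragraph can be reproduced directly from the reported counts (60 of 200 RA patients and 6 of 206 controls positive for CarP). The short sketch below is purely illustrative: it simply recomputes the published values from those counts and is not part of the original analysis.

# Quick numerical check of the diagnostic indices reported above for the CarP test.
# Counts taken from the text: 60/200 CarP-positive RA patients, 6/206 CarP-positive controls.
tp, fn = 60, 140      # RA patients: CarP-positive / CarP-negative
fp, tn = 6, 200       # controls:    CarP-positive / CarP-negative

sens = tp / (tp + fn)              # 0.30  (30%)
spec = tn / (tn + fp)              # 0.971 (97.1%)
ppv = tp / (tp + fp)               # 0.909 (90.9%)
npv = tn / (tn + fn)               # 0.588 (58.8%)
pos_lr = sens / (1 - spec)         # ~10.3
neg_lr = (1 - sens) / spec         # ~0.72

print(f"sens={sens:.1%} spec={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%} "
      f"+LR={pos_lr:.1f} -LR={neg_lr:.2f}")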
Correlation analysis between CarP positivity and RA features. We found no significant differences in serum CarP concentrations according to the demographic and clinical characteristics of RA patients (Table 3). Mean DAS-28 values were significantly higher in ACPA+ versus ACPA− RA patients. However, in bivariate correlation, we found no association between autoantibody positivity and values of DAS-28 greater than 3.2, which indicate moderate-severe disease. Moreover, in multiple logistic analysis adjusted for demographic factors and immunosuppressive therapy, the number of positive autoantibodies (0, 1, 2, or 3) was not significantly associated with the presence of moderate-severe disease (Table 4).

DISCUSSION

Although the diagnosis of RA is still based on clinical grounds, the demonstration of specific autoantibodies in sera is of significant diagnostic value and may also have prognostic implications. A plethora of biomarkers have been studied for the diagnosis of RA (17,18), but apart from ACPA and RF, no commercial test is currently available in clinical practice. In this study we expanded the current evidence about the performance of the CarP test for the diagnosis of established RA. We demonstrated that a commercially available CarP test has good accuracy (AUC > 0.8) for the diagnosis of established RA. However, although the test specificity was good (97.1%), its sensitivity (30%) was not satisfactory: this suggests that this commercially available CarP test does not perform well in ruling out RA, as confirmed by the low NPV and −LR values. Our results are in line with those of a recent meta-analysis reporting pooled sensitivity and specificity of different CarP tests for the diagnosis of RA of 42% and 96%, respectively (19). In the present study, we also focused on understanding whether testing CarP over and above ACPA and RF may add some diagnostic benefit. Therefore, we specifically looked at the accuracy of different combinations of CarP with ACPA or RF, or both. The comparison of the accuracy of three different combinations of CarP (CarP_ACPA, CarP_RF, and CarP_RF_ACPA) did not show significant differences with respect to the ACPA_RF combination. This suggests that the incorporation of CarP into the routinely ordered tests (ACPA and RF) is not useful for the diagnosis of RA. In agreement with our results, Regueiro et al (14) showed that the incorporation of anti-CarP antibodies into different combinations with ACPA and RF in the ACR/EULAR classification of RA resulted in only a modest increase in sensitivity (2.2% higher) at the cost of decreased specificity (8.1% lower). Moreover, no data reporting the cost-benefit ratio of adding CarP to conventional autoantibodies for the diagnosis of RA have been published to date. Therefore, based on our data and the available evidence, the incremental value of testing CarP for the diagnosis of RA is unclear. We also evaluated whether CarP testing may be of some diagnostic benefit in seronegative RA patients. In the stratum of ACPA- and RF-seronegative RA patients, the CarP test demonstrated low sensitivity (17.9%), high specificity (96.9%), and only moderate accuracy (AUC < 0.7), which suggests that CarP is not useful in seronegative patients. Of note, a low rate of CarP positivity was observed in the control group of RDs: 5.8% of patients with SSc (20) and 28.3% of patients with systemic lupus erythematosus (21). The CarP-positive patients from the SSc group (three patients) all had a history of a chronic seronegative RA-like nonerosive arthritis. It is therefore conceivable that CarP positivity may be associated with joint inflammation also in other connective tissue diseases.
Despite some data reporting a significant association between CarP and a severe course of RA (7), we did not observe significant differences in DAS-28 mean values, CRP, ESR, and HAQ between CarP+ and CarP− RA patients. Moreover, in multiple logistic analysis, we found no association between the number of positive autoantibodies and presence of moderate-severe disease (DAS-28 > 3.2). Some limitations of our study should be described. First, the cross-sectional nature of our study and the absence of radiographic data did not allow us to evaluate the presence of an association between CarP levels and severe, progressive, and erosive course of RA disease. Second, we enrolled patients under immunosuppressive treatment at the moment of CarP testing: although not documented to date, a negative effect of treatment with immunosuppressants on serum concentrations of CarP cannot be ruled out. Third, we should also consider the bias in the assessment of CarP performance introduced by the inclusion of RF and ACPA in the 2010 EULAR classification criteria. We selected these criteria because of the lack of complete x-ray data. However, it should be also emphasized that the use of the 1987 RA classification criteria might also have biased the results, although to a lesser extent, because of the inclusion of the RF (14). Last, because of the small sample size of the "other RDs" group, no firm conclusions could be drawn regarding the prevalence of CarP in other RDs. In conclusion, our data confirmed a good performance of CarP for the diagnosis of established RA. However, the additional value of CarP over conventional ACPA and RF biomarkers for the diagnosis of RA appears minimal.
Effect of nano pour point depressant on the flow properties of the waxy crude oil from Changqing Oilfield

In this paper, graphene oxide (GO) was modified with alkyl amidopropyl diethanolamine to obtain a nano pour point depressant (GO-PPD), which was used to improve the flowability of the waxy oil extracted from Changqing Oilfield, China. Fourier transform infrared (FTIR) spectroscopy, differential scanning calorimetry (DSC), polarized optical microscopy (POM) and viscometry were employed to evaluate the performance of the GO-PPD. The results showed that, compared with a traditional pour point depressant (PPD), the GO-PPD exhibited higher performance in promoting the flowability of the waxy crude oil. With 500 mg/kg of GO-PPD present in the waxy crude oil, the pour point could be reduced by 5.5 °C. Also, with 500 mg/kg of GO-PPD, the viscosity reduction rate of the waxy crude oil reached up to 52% at 30 °C. Through observation by polarized microscopy, we also found that with the introduction of GO-PPD into the crude oil, the formation of the wax crystals can be greatly retarded. This confirmed that graphene oxide derivatives can also serve as PPDs, which facilitate the flowability of certain crude oils (e.g., the waxy crude oil from Changqing Oilfield).

Introduction

Paraffin-wax deposition on the walls of wells and pipelines poses a great challenge to oil & gas production [1,2]. Paraffin wax is a major component of some hydrocarbons extracted from special reservoirs (e.g., shale reservoirs), and is also a main component of diesel and other refined products. The main components of the wax are linear and branched hydrocarbon molecules which usually have more than 16 and fewer than 40 carbons [3][4]. n-Alkanes of larger molecular size convert into wax at higher temperatures if no wax inhibitor is injected into the production liquids. The formation of wax reduces the production rate and can also lead to blockage of the pipelines [5][6][7]. The traditional methods to mitigate the risk of pipeline blockage caused by wax deposition include the injection of chemical additives into the flowlines of the extracted hydrocarbons [1,8]. Usually, the chemical additives can be split into two types: crystal modifiers and wax dispersants [9][10][11]. The crystal modifiers are usually oil-soluble copolymers, which can interact with the nuclei of the wax crystals and inhibit the deposition of the wax crystals [12]. Also, the wax-like segment in the crystal modifier participates in the formation of the wax crystal and thereby modifies the morphology of the wax [12]. Other segments (e.g., branched hydrocarbon chains, amide groups, etc.) in the molecule of the modifier do not cocrystallize with the original paraffin waxes in the crude oil. These non-wax-like segments provide steric hindrance on the wax surface that retards the growth and aggregation of the crystals [7]. There are many publications claiming that the size of the wax crystals can be significantly reduced by polyaminoamide additives [2,12]. However, it should be noted that there are many factors that influence the performance and application of the polymers, for example, the high cost of the preparation process, the low performance under high-shear conditions, and the non-negligible performance loss upon reheating. The wax dispersants are usually surface-active molecules that can adsorb on the surfaces of wax and pipeline walls.
Due to the adsorption layers on the pipeline inner surface, the wettability of these surfaces is kept hydrophilic, which reduces the adhesion force between the crystals and the inner surface of the pipeline [13]. The common crystal modifiers are ethylene-vinyl acetate copolymers (EVA), poly(ethylene-butene) copolymers (PEB), poly(maleic anhydride amide co-α-olefin) (MAC) and their derivatives. The content of vinyl acetate (VA) in a molecule (polar group content), the molecular weight, the length of the side hydrocarbon chain and the composition of the crude oil are the main factors that affect the performance of the crystal modifiers [14][15][16][17]. The optimum content of VA in a modifier (e.g., EVA) is ~30% [18][19]. The EVA molecules can alter the wax crystals from plates to spheres. It was reported that MAC was more effective than PEB in crude oils containing a high amount of asphaltenes [10]. Singhal et al. [20] summarized two empirical rules for selecting crystal modifiers: the carbon number of the alkyl ester side chain should be equal or close to that of the wax components, and the melting points of the modifier and the wax should be equal or close to each other. Dispersants are different kinds of surfactants, such as sulfonate surfactants [21], alkyl phenol derivatives, polyamides and naphthalene [13]. The water cut may have a significant effect on the performance of the wax dispersants. Crystal modifiers and wax dispersants have attracted much attention from researchers, but their performance still needs to be further improved. In recent years, nano pour point depressants have been investigated by many researchers to mitigate the risk of pipeline blockage caused by wax formation and deposition [22][23]. Compared with traditional crystal modifiers and wax dispersants, nano pour point depressants exhibit higher performance in reducing the pour point and improving the flowability of waxy oil. However, the working mechanisms of nano pour point depressants have not been fully understood. In this paper, graphene oxide (GO) was modified with alkyl amidopropyl diethanolamine to obtain a nano pour point depressant (GO-PPD), which was used to reduce the pour point and improve the flowability of the waxy oil extracted from Changqing Oilfield, China. Fourier transform infrared (FTIR) spectroscopy, differential scanning calorimetry (DSC), polarized optical microscopy (POM) and viscometry were employed to evaluate the performance of the GO-PPD. Experimental section 2.1 Materials The waxy crude oil was provided by Changqing Oilfield, China. The SARA information of the crude oil is listed in Table 1. The SARA components of the crude oil were separated based on the method described in one of our previous works [2]. The commercial pour point depressant, EVA with a molecular weight of 2000 and a vinyl acetate content of 28%, was purchased from Aladdin Biochemical Technology Co., Ltd. (Shanghai). The nanomaterial (GO) with a purity of 99% was also purchased from Aladdin Biochemical Technology Co., Ltd. (Shanghai). N,N-bis-(2-aminoethyl) dodecanamide was prepared in our lab. Preparation of the hybrid pour point depressant GO-PPD The GO-PPD was prepared by modification of GO with the amine (N,N-bis-(2-aminoethyl) dodecanamide). In this paper, the GO was reacted with the amine under vacuum (10⁻² Torr) at 180 °C for 2 hours. After the reaction, the product (GO-PPD) was obtained by removing the amine residues under vacuum at 150 °C for at least 1 hour.
Viscosity measurement The viscosity of the crude oil with and without the introduction of PPD was evaluated using a viscometer (Brookfield II, USA) at various temperatures. Each sample was measured three times to improve the reliability of the results. Pour point measurement The pour point of each waxy crude oil sample with and without the PPD was measured over the temperature range of 5-50 °C based on the standard ASTM D5853 (Standard Test Method for Pour Point of Crude Oils). Thermogravimetric analysis (TGA) of the crude oil The TGA test was performed using a Mettler Toledo A851 TGA/SDTA instrument. During each test, a 3-6 mg oil sample was placed in the pan of the instrument. Then the sample was heated from 35 °C to 500 °C at a rate of 10 °C/min under a N2 atmosphere (flow rate of 20 mL/min). DSC test of waxy crude oil samples The DSC analysis of the oil samples was carried out using a DSC apparatus (Mettler-Toledo DSC822e, Switzerland). In each measurement, a 6-8 mg oil sample was placed in the DSC pan and then heated (at a rate of 11 °C/min) from room temperature to 50 °C in a N2 atmosphere (flow rate of 20 mL/min). At 50 °C, the sample was held for 5 min to remove the memory effect of wax formation. Then the sample was cooled from 50 °C to -20 °C at a rate of 8 °C/min. During the cooling stage, the heat flow was recorded. Microscope observation of the morphology of wax crystals The saturates of the waxy crude oil were extracted [2] and placed on the glass slide of a polarized optical microscope (BX41-P, OLYMPUS, Japan) to observe the formation process and morphology of the wax crystals. Before each test, the saturates sample was heated to 50 °C to melt the already formed wax crystals. Then the sample was cooled from 50 °C to 10 °C to facilitate the formation of the wax crystals. During the observation, the temperature of the copper stage mounted on the microscope was kept at 10 °C. Results and discussion 3.1 Thermogravimetric analysis of crude oil sample Fig 1 shows the TGA curve of the crude oil sample over the temperature range of 35 °C-500 °C. It can be seen that as the temperature ramps to 350 °C, the mass loss of the sample reaches 86.75%, indicating that the main components of the crude oil sample are light hydrocarbons. When the sample was further heated from 350 °C to 450 °C, the mass of the sample was gradually reduced. At 500 °C, the mass loss of the sample was 96.18%, which indicated that there was only a small amount of hydrocarbons with carbon numbers higher than 35 in the original crude oil sample. Differential Scanning Calorimetry Analysis of crude oil and saturates samples The DSC curves of the crude oil and its saturates component are presented in Fig 2. As can be seen from Fig. 2, as the crude oil sample (line N0) was cooled from 50 °C, the curve showed an exothermic peak at 22.83 °C, indicating that wax crystals began to form at this temperature (the wax appearance temperature, WAT). Based on the area of the exothermic peak, the released heat of wax crystallization was calculated to be 0.53 J/g. After the appearance of the peak, the curve gradually decreased until the temperature reached -20 °C. The curve of the saturates sample showed a similar trend to that of the crude oil sample. However, the WAT and heat of wax crystallization of the saturates were found to be 24.77 °C and 2.90 J/g, respectively.
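These two quantities follow directly from the recorded cooling curve: the WAT is taken as the onset of the exothermic peak, and the heat of crystallization as the area under it. The following is a minimal sketch, assuming NumPy and a generic exported temperature/heat-flow trace, of how such values could be extracted; the synthetic curve used here is purely illustrative and is not the measured data of Fig 2, nor the analysis software actually used in this work.

```python
import numpy as np

def wat_and_heat_from_dsc(temp_c, heat_flow_w_per_g, cooling_rate_c_per_min,
                          baseline_window=20):
    """Estimate the wax appearance temperature (WAT) and the heat released by
    wax crystallization from a DSC cooling curve.

    temp_c                 : temperatures during cooling, in descending order (deg C)
    heat_flow_w_per_g      : exothermic heat flow per unit mass (W/g)
    cooling_rate_c_per_min : constant cooling rate used in the run (e.g. 8 C/min)
    baseline_window        : number of initial points used to estimate the baseline
    """
    hf = np.asarray(heat_flow_w_per_g, dtype=float)
    t = np.asarray(temp_c, dtype=float)

    # Estimate the flat baseline from the high-temperature part of the scan
    # and subtract it so the exothermic peak stands out.
    baseline = hf[:baseline_window].mean()
    noise = hf[:baseline_window].std()
    signal = hf - baseline

    # WAT: first temperature at which the signal rises clearly above baseline
    # noise (or above 2% of the peak height) as the sample is cooled.
    threshold = max(3.0 * noise, 0.02 * signal.max())
    above = np.where(signal > threshold)[0]
    wat = t[above[0]] if above.size else float("nan")

    # Heat of crystallization: integrate the exothermic peak over time,
    # converting the temperature axis to time via the constant cooling rate.
    time_s = (t[0] - t) / cooling_rate_c_per_min * 60.0
    heat_j_per_g = np.trapz(signal, time_s)
    return wat, heat_j_per_g

# Illustrative use with a synthetic cooling curve (not measured data):
temps = np.linspace(50, -20, 701)                       # 0.1 C steps
flow = 0.002 * np.exp(-((temps - 21.0) ** 2) / 8.0)     # fake exothermic peak
wat, dh = wat_and_heat_from_dsc(temps, flow, cooling_rate_c_per_min=8)
print(f"WAT ~ {wat:.1f} C, heat of crystallization ~ {dh:.2f} J/g")
```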
The WAT of the saturates was approximately 2 °C higher than that of the crude oil sample, indicating that the polar materials (e.g., resins, asphaltenes, etc.) in the crude oil sample may have an inhibition effect on the appearance of the wax crystals. IR analysis of polar components of crude oil The component with high polarity was separated using the method described in one of our previous works [2]. The polar component was analyzed using an infrared spectroscopy instrument (Nicolet 5700, USA). The IR results are shown in Fig 3. The peak in the range of 3600-3350 cm⁻¹ corresponds to the hydroxyl stretching vibration. The two peaks at 2600 cm⁻¹ and 2550 cm⁻¹ may be caused by the stretching vibration absorption of the -SH group. The absorption peak at 1630 cm⁻¹ arises from the delocalized π bond, the stretching vibration of the amide C=O bond, or the bending vibration of the N-H bond. The results show that the separated component contains strongly polar compounds (e.g., resins, asphaltenes, etc.). Effect of PPD on the viscosity of the crude oil In this section, 500 mg/kg of PPD was introduced into the crude oil sample, and then the viscosity of each mixed oil sample with PPD was measured to check the PPD's performance. As can be seen in Fig 5, the viscosity of the crude oil sample decreased as the temperature increased. However, at 27 °C, the viscosity of the crude oil showed a breakpoint: when the temperature was further increased above 27 °C, no significant further reduction in the viscosity was found. At 30 °C, the viscosity of the crude oil was 1855 mPa·s. It should be noted that with 500 mg/kg GO-PPD (or 500 mg/kg EVA) present, the viscosity-temperature curve followed a trend similar to that of the neat crude oil sample. With 500 mg/kg GO-PPD and 500 mg/kg EVA, the viscosity of each mixed oil sample was found to be 146 mPa·s and 126 mPa·s (with a reduction rate of 52%), respectively. Therefore, the performance of GO-PPD in reducing the viscosity of the crude oil is higher than that of EVA. Effect of GO-PPD on the pour point of the crude oil In this section, the effect of GO-PPD on the pour point of the crude oil was measured. As can be seen in Table 2, with 500 mg/kg GO-PPD present in the crude oil sample, the pour point of the oil was lowered to 18.5 °C, which was lower than the pour point (20.2 °C) of the crude oil with 500 mg/kg EVA. This result confirmed the effectiveness of GO-PPD in reducing the pour point of the crude oil. Wax morphology analysis In this section, the effects of GO-PPD and EVA on the morphology of the wax crystals were examined; the results are presented in Fig 6. As can be seen in Fig 6(a), without chemical additive, sword-like wax crystals were observed. As the amount of the crystals increased, a three-dimensional network of wax crystals was observed. However, with the introduction of EVA into the crude oil, the formed wax crystals were effectively dispersed, and the amount and size of the crystals were significantly reduced (see Fig 6(b)). When GO-PPD was present in the crude oil sample, the formed wax crystals were further dispersed, and the size of each wax crystal was also smaller than that of the crystals formed in the presence of EVA, indicating that GO-PPD is a promising PPD that can effectively inhibit the formation of wax and promote the flowability of the crude oil (from Changqing Oilfield). Conclusions In this paper, we have prepared a novel pour point depressant for the waxy crude oil of Changqing Oilfield. The performance of GO-PPD was systematically investigated.
The results showed that, compared with the traditional pour point depressant (EVA), the GO-PPD exhibited higher performance in promoting the flowability of the waxy crude oil. With 500 mg/kg GO-PPD present in the waxy crude oil, the pour point could be reduced by 5.5 °C. Also, with 500 mg/kg GO-PPD, the viscosity reduction rate of the waxy crude oil reached 52% at 30 °C. Through observation by polarized microscopy, we also found that with the introduction of GO-PPD into the crude oil, the formation of the wax crystals was greatly retarded. This confirms that graphene oxide derivatives can also serve as PPDs, which facilitates the flowability of certain crude oils (e.g., the waxy crude oil from Changqing Oilfield).
2021-12-12T16:51:28.879Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "d95f3af667b5d1bf5297dcf54e167bca2474d7d0", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/105/e3sconf_gesd2021_01050.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6b0cc7d78d185a90fe99a5871b33141c8f79be67", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Engineering", "Materials Science" ], "extfieldsofstudy": [] }
268197303
pes2o/s2orc
v3-fos-license
A Protocol for Evaluating Digital Technology for Monitoring Sleep and Circadian Rhythms in Older People and People Living with Dementia in the Community Sleep and circadian rhythm disturbance are predictors of poor physical and mental health, including dementia. Long-term digital technology-enabled monitoring of sleep and circadian rhythms in the community has great potential for early diagnosis, monitoring of disease progression, and assessing the effectiveness of interventions. Before novel digital technology-based monitoring can be implemented at scale, its performance and acceptability need to be evaluated and compared to gold-standard methodology in relevant populations. Here, we describe our protocol for the evaluation of novel sleep and circadian technology which we have applied in cognitively intact older adults and are currently using in people living with dementia (PLWD). In this protocol, we test a range of technologies simultaneously at home (7-14 days) and subsequently in a clinical research facility in which gold standard methodology for assessing sleep and circadian physiology is implemented. We emphasize the importance of assessing both nocturnal and diurnal sleep (naps), valid markers of circadian physiology, and that evaluation of technology is best achieved in protocols in which sleep is mildly disturbed and in populations that are relevant to the intended use-case. We provide details on the design, implementation, challenges, and advantages of this protocol, along with examples of datasets. Introduction 1. The Need for Technology to Monitor Sleep and Circadian Rhythms Longitudinally Sleep and the circadian system are important contributors to well-being and both physical and mental health [1][2][3]. Disruptions to sleep or circadian rhythms may be a predictor of and/or contributor to disease progression as well as having a negative impact on quality of life. Much of our knowledge in this area is based on self-report, cross-sectional studies, or short-term laboratory studies. The capacity to unobtrusively monitor sleep-wake cycles and circadian rhythms over long periods of time at home offers the following opportunities: (a) early detection of decline and implementation of appropriate action, (b) monitoring disease progression and associated clinical outcomes, (c) increasing understanding of the relationship between sleep and circadian physiology and clinical symptoms, (d) monitoring the response to interventions. Longitudinal monitoring of sleep and circadian variables within an individual may also facilitate the development of personalised interventions.
People living with dementia (PLWD) and their caregivers are examples of populations that may benefit from monitoring of sleep and circadian rhythms over long periods of time. Disturbances of sleep and circadian rhythms are highly prevalent in dementia and include night-time awakening and wandering, long naps during the daytime, and early or late sleep timing [4][5][6][7]. Sleep timing disturbances may vary across dementias such as fronto-temporal dementia (FTD) and Alzheimer's disease [8]. Sleep disorders are prevalent in dementia, particularly obstructive sleep apnoea and REM sleep behaviour disorder. These sleep disorders are risk factors for dementia and neurodegeneration and contribute to cognitive decline [9][10][11]. These disruptions not only affect the quality of life of PLWD but are also a burden to their care givers (e.g., [12][13][14][15]). Sleep and circadian disruption are a major contributing factor to PLWD being moved into care homes (e.g., [16][17][18]). These sleep and circadian disturbances may be a consequence of the neurodegenerative process and, as such, be an indicator of disease progression (e.g., [19]). Sleep disturbances may also drive disease progression and thus be a target for intervention. Some of the symptoms of dementia appear to be very sensitive to sleep disturbance. For example, the night-to-night variation in sleep continuity predicts the day-to-day variation in vigilance, cognition, memory, and behavioural problems in people with Alzheimer's disease [20]. Gold-Standard Assessments of Sleep and Circadian Rhythms: Advantages and Disadvantages Sleep: We can measure sleep in many different ways, from simple self-report (e.g., sleep diaries), to increasing complexity at the behavioural (e.g., bed occupancy) and physiological (e.g., EEG, cardiovascular) level. The classification of vigilance states can be made at the simple sleep vs. wake distinction, at the more detailed macrostructure of the different stages of non-rapid eye movement (NREM) sleep, stages N1 to N3, and REM sleep, and finally at the microstructure of the electroencephalogram (EEG) signal, as reflected in power spectral density or other EEG measures. From these measurements a range of parameters to describe sleep can be derived: self-reported sleep quality, the timing of sleep within the 24-h day, total sleep time (TST), sleep onset latency (SOL), wake after sleep onset (WASO), sleep efficiency (SE), spectral power of different EEG frequency bands, and individual EEG events such as slow waves and sleep spindles as well as their phase relationships (e.g., [1,5]). The gold-standard method of assessing sleep is laboratory-based polysomnography (PSG), which is performed in accordance with guidelines of the American Academy of Sleep Medicine [21]. PSG is a comprehensive overnight physiological assessment including EEG, electrooculogram (EOG), electromyogram (EMG), electrocardiogram (ECG), oxygen saturation (SpO2), respiration effort and airflow, limb movement (EMG), body position, and video recording. The recordings can then be scored to provide a detailed picture of sleep structure and physiology, including the presence of any clinical sleep disorders such as sleep apnoea and periodic limb movement disorder. To try and reduce the burden to participants and the intensive staff requirement for the acquisition and analysis of PSG recordings, there has recently been a move to develop devices that utilise reduced montages, as well as to improve automated scoring algorithms.
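To make the relationship between an epoch-by-epoch hypnogram and the summary parameters listed above (TST, SOL, WASO, SE) concrete, here is a minimal sketch in Python. It is not taken from the study's analysis pipeline, and it makes the simplifying assumption that the scored period starts at lights-off/in-bed; the example hypnogram is invented.

```python
from typing import List

EPOCH_SEC = 30  # standard AASM scoring epoch length

def sleep_summary(hypnogram: List[str]) -> dict:
    """Derive common sleep summary measures from an epoch-by-epoch hypnogram.

    `hypnogram` is a list of stage labels for consecutive 30-s epochs of the
    in-bed period, e.g. ["W", "W", "N1", "N2", "N3", "R", ...].
    """
    n = len(hypnogram)
    sleep_epochs = [i for i, s in enumerate(hypnogram) if s != "W"]
    if not sleep_epochs:
        return {"TST_min": 0.0, "SOL_min": None, "WASO_min": None, "SE_pct": 0.0}

    sleep_onset = sleep_epochs[0]           # first non-wake epoch
    last_sleep = sleep_epochs[-1]           # final non-wake epoch

    tst_min = len(sleep_epochs) * EPOCH_SEC / 60
    sol_min = sleep_onset * EPOCH_SEC / 60  # time from start of period to sleep onset
    waso_epochs = sum(1 for s in hypnogram[sleep_onset:last_sleep + 1] if s == "W")
    waso_min = waso_epochs * EPOCH_SEC / 60
    se_pct = 100 * len(sleep_epochs) / n    # sleep efficiency over the in-bed period

    return {"TST_min": tst_min, "SOL_min": sol_min,
            "WASO_min": waso_min, "SE_pct": round(se_pct, 1)}

# Example: 8 h in bed, scored as wake for the first 20 min, then mostly sleep.
example = ["W"] * 40 + ["N2"] * 900 + ["W"] * 10 + ["R"] * 10
print(sleep_summary(example))
```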
Circadian rhythmicity: To understand the contribution of the circadian system to health and disease, it is necessary to be able to characterise its properties, which include the phase (timing) and amplitude (strength) of the rhythms. Traditionally, assessment of phase and amplitude has been achieved by acquiring time series data of gold standard measures (e.g., melatonin, cortisol, core body temperature) in highly controlled laboratory conditions (i.e., dim light, continual wakefulness, controlled posture, controlled calorie intake) [22]. The gold-standard sleep and circadian assessment approaches have a number of drawbacks: (a) high associated cost due to the requirement for participants to be supervised in a laboratory environment by appropriately skilled staff, (b) a high level of burden on participants due to the amount of equipment that needs to be worn and the need to travel away from home to a laboratory setting, (c) they are potentially invasive if, for example, blood samples are collected, (d) they require technical analysis skills, e.g., for scoring the PSG recording, (e) they are unrepresentative of normal individual sleep patterns due to first night effects and novel controlled surroundings. Moreover, a single PSG recording and a single melatonin profile in the laboratory only provide a snapshot of an individual's sleep/circadian physiology and behaviour. Technology for Monitoring Sleep and Circadian Rhythms at Home: Current and Novel Approaches New digital health technology to monitor sleep/circadian behaviour and physiology at home is rapidly emerging on the consumer and research markets. These devices can potentially provide behavioural level data, including bed occupancy and activity, as well as sleep stages, heart rate, breathing rate, oxygen saturation, and may even quantify sleep apnoea. Some devices also measure environmental variables including light, noise, and air quality. Consumer monitoring devices are designed to appeal to the general public in terms of cost, appearance, and the information that they provide. However, as these are consumer rather than medical devices, no particular level of quantitative performance is mandated or guaranteed. Nonetheless, the ability to cost-effectively monitor sleep and circadian rhythms in an individual's own home offers several advantages. In particular, the assessments are made in a natural environment and longitudinally, which allows the influence of daily activities/behaviours and local environmental factors, including light and temperature, on sleep and circadian rhythms to be assessed. Longitudinal assessments of circadian rhythms and sleep, to date, have taken three approaches: (1) measuring rest-activity patterns with wrist-worn actigraphy in conjunction with a sleep diary, and using sleep timing as a proxy for the phase of the circadian pacemaker, (2) assessment of circadian phase through sample collection and measurement of melatonin or its metabolites at defined intervals, (3) combining light and activity measurements with mathematical models to predict circadian phase and period and assess the relative contribution of environmental and biological factors to sleep phenotypes [22,23].
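Relating to approach (2) above, and to the melatonin-based gold standard, circadian phase is commonly summarised as the time at which evening melatonin first rises above a threshold. The sketch below is an assumption-laden illustration, not the study's assay pipeline: the fixed 4 pg/mL saliva threshold is one commonly used convention (other protocols use a fraction of the individual peak), and the evening profile is synthetic.

```python
import numpy as np

def dim_light_melatonin_onset(times_h, melatonin_pg_ml, threshold=4.0):
    """Estimate the dim light melatonin onset (DLMO) from an evening
    saliva melatonin profile.

    times_h         : sample clock times in decimal hours (e.g. 19.0, 19.5, ...)
    melatonin_pg_ml : melatonin concentrations for those samples
    threshold       : onset threshold in pg/mL

    Returns the interpolated clock time (decimal hours) at which melatonin
    first rises above the threshold, or None if it never does.
    """
    t = np.asarray(times_h, dtype=float)
    m = np.asarray(melatonin_pg_ml, dtype=float)
    for i in range(1, len(m)):
        if m[i - 1] < threshold <= m[i]:
            # Linear interpolation between the two samples bracketing the rise.
            frac = (threshold - m[i - 1]) / (m[i] - m[i - 1])
            return float(t[i - 1] + frac * (t[i] - t[i - 1]))
    return None

# Illustrative (synthetic) evening profile sampled every 30 min from 19:00.
times = np.arange(19.0, 23.01, 0.5)
levels = [1.0, 1.5, 2.0, 3.0, 5.5, 9.0, 14.0, 18.0, 21.0]
print(dim_light_melatonin_onset(times, levels))  # ~20.7, i.e. about 20:42
```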
Actigraphy records limb movement activity (accelerometry) and then uses a proprietary algorithm to process this movement data to estimate whether an individual is awake or asleep for a defined epoch of time (e.g., 60 s), and subsequently derive sleep measures including TST, SOL and WASO. Importantly, the current guidelines recommend that, for actigraphy to provide useful information, it should be combined with a daily sleep diary, which imposes a burden on the participants. In addition to assessment of information on sleep duration and efficiency, non-parametric analysis can be applied to determine a range of variables relevant to sleep regularity and circadian rhythmicity: inter-daily stability (IS), a measure of day-to-day consistency of activity patterns; intra-daily variability (IV), a measure of how much activity varies within a 24-h period; the 10 h of highest activity (M10); the 5 h of lowest activity (L5); and relative amplitude (M10:L5) [24]. However, the use of the timing of sleep/rest periods as an estimate for circadian phase is not advisable [2]. This is because the relationship between the circadian clock, as indexed by melatonin, and sleep timing varies both in healthy individuals [25] and in different mental health conditions [2]. Furthermore, the phase relationship between sleep and circadian rhythms is relevant; for example, it predicts whether or not late sleep timing (eveningness) associates with depressive symptoms [3]. Field assessments of the gold-standard marker of circadian phase, i.e., melatonin profiles, are challenged by the fact that melatonin is sensitive to exogenous factors including environmental light and posture. Collection of saliva samples, under dim light whilst seated, at 30 min intervals in the 3-4 h before habitual bedtime allows the dim light melatonin onset (DLMO) to be determined as a marker of circadian phase. Implementation of technology, including containers that track when salivettes are removed for sampling, has allowed the development of home protocols that have been validated against DLMO collected in the laboratory [26]. An alternative approach that is less restrictive for participants is 48-h urinary collection to measure the urinary metabolite of melatonin, 6-sulphatoxymelatonin (aMT6s). This methodology has been used successfully in both blind individuals and those living with schizophrenia, who frequently suffer circadian and sleep disruption, to track circadian phase over several weeks (e.g., [27,28]). It should be noted that this approach may cause burden to participants and, as the circadian parameters are computed from a rhythm derived from samples collected over 4-8-h bins, the markers may not have sufficient resolution to detect small but relevant changes in circadian phase. More recently, machine learning, statistical and mathematical models have been used to extract features from only a few samples of high dimensional data, e.g., transcriptomics, metabolomics, or longitudinal simultaneous recordings of light exposure, activity, and physiology [22]. For example, models of the interaction between the circadian system, the sleep homeostat, and environmental light exposure have been successfully applied to wearable data to predict circadian phase [29,30]. However, many of these approaches have yet to be tested and validated in different populations or under different sleep/wake, light/dark schedules.
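The non-parametric measures mentioned above (IS, IV, M10, L5 and relative amplitude) can be computed directly from an evenly sampled rest-activity series. The sketch below is a plain reimplementation of the standard formulas and is not the proprietary actigraphy software referred to in the text; the synthetic hourly counts are purely illustrative, and the code assumes the series covers whole days with non-zero variance.

```python
import numpy as np

def nonparametric_rest_activity(activity, samples_per_day=24):
    """Compute IS, IV, M10, L5 and relative amplitude (RA) from an
    evenly sampled rest-activity series (e.g. hourly activity counts
    spanning several whole days)."""
    x = np.asarray(activity, dtype=float)
    p = samples_per_day
    days = len(x) // p
    x = x[:days * p]                       # use whole days only
    n = len(x)
    mean = x.mean()

    # Average 24-h profile across days.
    profile = x.reshape(days, p).mean(axis=0)

    # Inter-daily stability: variance of the mean 24-h profile relative to
    # the overall variance (0 = no day-to-day regularity, 1 = perfect).
    is_ = (n * np.sum((profile - mean) ** 2)) / (p * np.sum((x - mean) ** 2))

    # Intra-daily variability: hour-to-hour (first difference) variance
    # relative to the overall variance; higher = more fragmented rhythm.
    iv = (n * np.sum(np.diff(x) ** 2)) / ((n - 1) * np.sum((x - mean) ** 2))

    # M10 / L5: most and least active 10-h / 5-h windows of the mean
    # profile (allowing wrap-around at midnight), and relative amplitude.
    wrapped = np.concatenate([profile, profile])
    m10 = max(wrapped[i:i + 10].mean() for i in range(p))
    l5 = min(wrapped[i:i + 5].mean() for i in range(p))
    ra = (m10 - l5) / (m10 + l5) if (m10 + l5) > 0 else np.nan

    return {"IS": is_, "IV": iv, "M10": m10, "L5": l5, "RA": ra}

# Illustrative use with 7 days of synthetic hourly counts.
rng = np.random.default_rng(0)
base = np.tile(np.r_[np.full(8, 5.0), np.full(16, 60.0)], 7)  # quiet nights, active days
counts = base + rng.normal(0, 5, base.size)
print(nonparametric_rest_activity(counts))
```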
Evaluating Technology: The Issues The main issues with novel technology are the following: (1) lack of evaluation against gold standard measures, (2) if evaluation studies are performed then they are typically in young, healthy individuals for a habitual time-in-bed period where sleep efficiency is high and where rest/activity rhythms are robust and regular, (3) consumer device hardware and algorithms are constantly being updated, which means that any evaluation that has been performed may rapidly be out of date. The predominance of evaluation studies of young participants in laboratory studies means that the device performance may not translate to situations of disturbed sleep/circadian rhythms or to clinical/older populations who may benefit from long-term use of the devices. This is because sleep undergoes well-characterised changes with age, but also in dementia, at both the macro- and microstructure level. In addition to changes in sleep, ageing is associated with changes in the circadian system in terms of its timing, amplitude, and relationship with sleep (e.g., [31]), as well as changes in light exposure, crucial for stability and robustness of the circadian clock, due to changes in photic sensitivity (e.g., [32]) and the lived light environment (e.g., [33]). As such, circadian technologies may not perform as well in older individuals. Nevertheless, there is a lack of evaluation studies, in particular in PLWD [34]. For example, a recent systematic review of the validity of non-invasive sleep-measuring devices, aimed at assessing their future utility in dementia, was not able to identify any studies in people with mild cognitive impairment or Alzheimer's Disease [35]. The 'International Biomarkers Workshop on Wearables in Sleep and Circadian Science' held at the 2018 SLEEP Meeting of the Associated Professional Sleep Societies identified that the main limitation of large-scale use of novel sleep and circadian wearables is the lack of validation against gold-standard measures [36]. The workshop formulated guidelines for validation and confirmed the following: PSG is the only valid reference for TST and sleep staging; PSG sleep records should be scored using current AASM guidelines; PSG sleep records should be double scored to minimise bias. Devices differ in the level at which they classify sleep-wake, from binary (sleep or wake) to four stages (wake (W), REM, light sleep (LS), deep sleep (DS)) to full AASM (W, stage 1 NREM (N1), stage 2 NREM (N2), stage 3 NREM (N3), REM) scoring. The tendency of a device to over- or underestimate sleep and wake will depend on its sensitivity (ability to correctly classify sleep epochs), specificity (ability to correctly classify wake epochs), and accuracy (proportion of all epochs correctly detected) (reviewed in [36,37]). These factors will depend on the physiological variables and classification system used to determine sleep and wake. For example, according to traditional performance measures, actigraphy tends to have high sensitivity and accuracy but low specificity [38]. Despite this, actigraphy, when combined with a sleep diary, is considered a valuable tool for long-term monitoring of rest/activity patterns in clinical populations [39].
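Since sensitivity, specificity, and accuracy are central to the evaluation framework described above, the following minimal sketch shows how they are typically computed epoch by epoch for the binary sleep/wake case against PSG, treating sleep as the "positive" class. The labels and toy data are illustrative and are not drawn from the study.

```python
def epoch_by_epoch_performance(device_epochs, psg_epochs):
    """Epoch-by-epoch agreement of a device against PSG for binary
    sleep/wake classification.

    device_epochs, psg_epochs : equal-length sequences of "S" (sleep)
    or "W" (wake), one entry per 30-s epoch of the analysis period.
    """
    assert len(device_epochs) == len(psg_epochs)
    tp = fp = tn = fn = 0
    for d, p in zip(device_epochs, psg_epochs):
        if p == "S" and d == "S":
            tp += 1          # sleep correctly detected
        elif p == "S" and d == "W":
            fn += 1          # sleep missed
        elif p == "W" and d == "W":
            tn += 1          # wake correctly detected
        else:
            fp += 1          # wake misclassified as sleep
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    accuracy = (tp + tn) / len(psg_epochs)
    return {"sensitivity": sensitivity, "specificity": specificity, "accuracy": accuracy}

# Toy example: a device that scores almost everything as sleep shows high
# sensitivity but poor specificity, as described in the text.
psg    = ["W"] * 40 + ["S"] * 200 + ["W"] * 20
device = ["W"] * 5  + ["S"] * 245 + ["W"] * 10
print(epoch_by_epoch_performance(device, psg))
```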
Our Approach to Technology Evaluation Although it is crucial to evaluate the performance of a device against gold-standard measures, perhaps even more important is to assess its performance longitudinally in the real world where it will be used, in both older adults and PLWD. Simultaneously, to increase our understanding of sleep and circadian rhythms in the real world and their interaction with disease processes, it is important to monitor relevant environmental variables such as light and temperature that may impact sleep or the circadian system, as well as aspects of waking function such as alertness, mood, and performance. Most validation studies are limited to healthy participants. Since co-morbidities are highly prevalent in PLWD, validation studies in, for example, cognitively intact older participants should use lenient inclusion/exclusion criteria to make the study more relevant to the intended use case. In addition, it is important to assess the acceptability, scalability and cost-effectiveness of any devices. Here we describe our approach to evaluating novel wearable and contactless sleep, circadian, and environmental monitoring technology in community-dwelling adults, both at home and in the laboratory, using multiple devices simultaneously against accepted standard measures. We first applied this approach in cognitively intact older adults to assess the performance and acceptability of a range of devices and have already published some of our findings [23,[40][41][42][43]. Here we highlight some of our key findings in relation to the protocol design. We used our initial findings to select technology to assess in PLWD and their caregivers; this feasibility study is ongoing and so here we provide participant demographics to date, alongside example datasets. Selection of Participants For our initial protocol in cognitively intact older adults, the eligibility criteria for participation were designed to maximize the relevance to future home studies in the PLWD population. An essential criterion was that it was safe for the participant to participate in the study. Since co-morbidities are highly prevalent in PLWD, our initial validation study used lenient inclusion/exclusion criteria. Those with stable controlled medical conditions, with the exception of dementia, were included. Since most PLWD are older, the target population consisted of independently living, non-smoking men and women aged 65-85 years. Some standard exclusion criteria were maintained. Participants had to consume ≤28 units of alcohol per week and be current non-smokers. By using these inclusion/exclusion criteria, our study population was relevant to PLWD, and both recruitment and retention were very successful. Selection of Technology to Evaluate We categorised and evaluated technology according to how it is used or what it is monitoring: (a) wearable devices that are placed on the body (e.g., wrist, head), (b) nearable/contactless devices that are placed near the individual (e.g., bedside or under the mattress) to detect physiological or behavioural signals, (c) environmental monitoring devices (e.g., light, temperature), (d) usable devices that the participant interacts with (e.g., electronic tablet for cognitive testing), (e) video monitoring that provides information about the individual and the environment.
The technology assessed in our completed and ongoing studies was selected based on the following criteria: (a) previous evaluation studies, (b) inclusion in comparable studies, (c) regulatory status, (d) cost, (e) potential acceptability to PLWD. We included both research-grade and consumer-directed devices. The technology selection also considered the potential burden on participants. For example, many wearables have limited battery life and may need regular charging during the course of the study. This will not only result in gaps in the data but also creates the potential for participants to forget to reapply the device after charging. In addition, some devices need manual downloading whilst others automatically upload data at the end of the recording to a cloud-based server (see Section 4.1.5 for further details on our approach). For all approaches, it is essential to ensure compliance with the general data protection regulations (GDPR). We aimed to evaluate many devices simultaneously. An advantage of this approach is that the performance of a particular device can be compared not only to the standard methodology but also to the other devices. The number of devices we tested simultaneously took account of the potential burden on participants and ensured that adequate signals could still be obtained when multiple devices were worn. Cognitively intact participants wore no more than four wrist devices (two per arm) at any one time, whereas PLWD wore no more than two. The technology used in each protocol is described in the Methods section. Study Protocol Our protocol was designed to assess the performance and acceptability of a range of devices, firstly longitudinally at home (7-14 days) with actigraphy combined with a sleep diary as a reference point, and then in an overnight laboratory session with concurrent gold standard video-PSG. The AASM recommends that actigraphy is always accompanied by completion of sleep diaries to allow optimal interpretation of the actigraphic data [44], and that data are collected for a minimum of 72 h up to 14 days [39]. The concept of the study is shown in Figure 1 and a schematic diagram of the protocol is shown in Figure 2. Further details of the protocol can be found in the Methods section. Importantly, we provided ongoing support to the participants, including in-person training sessions (which were available 24/7) and, for PLWD, going to their homes to set up the technology. This approach assured compliance and a high level of data completeness. Recruitment and Participant Characterisation For our initial protocol in cognitively intact older adults, we contacted n = 729 potentially eligible participants who were registered on the Surrey Clinical Research Facility (CRF) database. From these, n = 177 responded, n = 24 failed the initial telephone screen, and n = 46 were booked for screening visits, with the remaining n = 107 placed on a waiting list. Of the n = 46 screened, n = 45 were deemed eligible for the study; of these, n = 3 withdrew consent and n = 6 were withdrawn due to the start of the COVID-19 pandemic and face-to-face research being suspended. We enrolled n = 36 participants into the study and 35 participants (14 female, 21 male) completed the study.
Current co-morbid stable medical conditions were reported by 40% of participants. These included disorders of the following systems: endocrine, cardiovascular, gastrointestinal, respiratory, musculoskeletal, and ocular. In addition, 26% of the participants were taking prescribed medications including statins, ACE inhibitors, metformin, and calcium channel blockers. Data Completeness The 35 cognitively intact participants from our initial study collected data for 7-14 days at home, which combined to a total of 397 days/nights. We recorded from 6-10 devices (the number of nights per device depended upon participant and device), giving a total of 2748 device days/nights from all the devices combined, with 95% data completeness. In the laboratory, we recorded 35 nights with 8-10 devices, giving 331 device nights, including PSG, and achieved 98% data completeness. As our feasibility study in PLWD is ongoing, we cannot yet report its data completeness. Device Acceptability Participants completed a single acceptability questionnaire on all of the technology used at home and in the laboratory. For each device, participants were asked to rate comfort (1 = very uncomfortable to 7 = very comfortable) and ease of use (1 = very difficult to 7 = very easy), and to record any problems. For our completed study in cognitively intact participants (n = 35), for all of the wearable devices combined, the participants rated comfort as 3.8 ± 1.9 and ease of use as 6.0 ± 1.2. For the nearable devices (n = 17 reported), participants rated comfort as 6.3 ± 1.2 and ease of use as 6.3 ± 1.1. Examples of At-Home and In-Laboratory Recordings One of the strengths of our approach, and of using multiple simultaneous devices, is that it is possible to obtain concurrent physiological, behavioural, and environmental signals both at home and in a laboratory setting. Here we show examples from individual participants of the datasets that were obtained. Figure 3 shows exemplar data from a cognitively intact male participant in his 60s who had moderate sleep apnoea (AHI = 24.1) and was living with controlled type 2 diabetes. The raster plot includes 14 consecutive days of home recording followed by the overnight in-laboratory session, simultaneously using two contactless, nearable technologies (WSA and EMFIT) and a wrist-worn actigraphy device (actigraphy was not used in the laboratory) with completion of a subjective sleep diary. The raster plot provides a measure of sleep behaviour and shows the day-to-day variation in sleep timing. The two under-mattress, contactless devices accurately detect bed presence, as demonstrated by their concordance with the information captured in the sleep diary [41]. These devices can also capture daytime naps taken in bed and generate automated sleep summaries, without using information from a sleep diary (boxed regions in the raster). Figure 4 provides an example of data captured in the laboratory session where polysomnography was recorded for 10 h with simultaneous use of three nearable devices. We implemented a 10 h time-in-bed period to induce a lower sleep efficiency, which allows for a better evaluation of a device's ability to correctly classify both sleep and wakefulness. For each nearable, discrepancy between the device and manually scored PSG is indicated with darker coloured bands at three different levels of sleep stage classification: (1) wake vs. sleep, (2) wake vs. NREM vs. REM, (3) wake vs. deep sleep vs. light sleep vs. REM.
Some of the devices already 'detected' sleep before lights-off or after lights-on. Please note that in many laboratory evaluation studies, the device performance is assessed only during the lights-off phase. Since in the real world the lights-off/lights-on information is in most cases not available, it is important to evaluate device performance over the entire in-bed period, and not just during the lights-off period. For all three nearable devices, it can be seen that as the resolution of the sleep staging increases, so does the number of discordant epochs. Thus, while the nearable devices might be able to distinguish to some extent between wake and sleep, this is not accurate, as they identify epochs of sleep before lights-off/after lights-on when the participant is awake. They also cannot accurately distinguish between different stages of sleep. Figure 5 provides an example of heart rate and breathing rate captured from PSG and three nearable devices from a single participant during a 10 h period in bed in the laboratory. The pattern observed in the vital signs with Nearable 1 matches that detected by the gold-standard PSG signals, whereas Nearable 2 recorded some spikes in heart rate that are not seen in the PSG or Nearable 1. Nearable 3 only records breathing rate, and the signal was consistent across the course of the night.
Figure 6 shows multiple consecutive days of recording in a woman living with dementia in her 70s and her partner, a man in his 80s, who share a bed. The PLWD's sleep is fragmented and varies from day to day in terms of duration and timing. The partner also experiences nights of discontinuous sleep. Figure 7 provides a visual representation of the difference in device performance for standard sleep measures and allows effective comparison. For example, if sleep onset latency is of primary importance to a protocol, use of Nearable 3 would be recommended since it provides a more accurate estimate compared to other devices, while for sleep efficiency estimation, the use of a wrist-worn actiwatch would provide similar or better accuracy compared to a nearable. Figure 8 is an example of light exposure data for a 24 h period for an individual from two worn devices, as well as the light levels in the room in their house in which they spent the majority of their time. For the 17 participants who wore both the actiwatch and HOBO, there was a significant correlation between the values measured by the two devices (r = 0.36, p < 0.001). For this individual, their morning light exposure was up to 10,000 lux and exceeded the light levels in their home, suggesting they were outside. Their afternoon light exposure varied between 10 and 1000 lux but was generally lower than the light levels measured in their house, suggesting they were indoors. Their evening light exposure was quite low and generally <100 lux.
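The reasoning used above for Figure 8, in which epochs where the worn-sensor light clearly exceeds the concurrently measured room light are interpreted as time spent outdoors, can be expressed as a simple heuristic. The sketch below is purely illustrative: the thresholds and the hourly values are invented, and this is not an analysis rule stated by the study.

```python
import numpy as np

def likely_outdoors(worn_lux, room_lux, factor=2.0, min_lux=1000.0):
    """Flag epochs where light recorded by a worn sensor is both high in
    absolute terms and clearly exceeds the concurrently measured room
    light, suggesting the wearer was outdoors (heuristic only).
    """
    w = np.asarray(worn_lux, dtype=float)
    r = np.asarray(room_lux, dtype=float)
    return (w >= min_lux) & (w > factor * r)

# Illustrative hourly values for part of one day (synthetic):
worn = [30, 50, 8000, 10000, 600, 200, 80, 40]
room = [100, 120, 300, 350, 400, 380, 150, 90]
print(likely_outdoors(worn, room))  # True only for the two bright morning hours
```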
Discussion We have presented our approach for evaluating multiple, concurrent sleep/circadian monitoring technologies both at home and in the lab against accepted standard measures in older people and PLWD. This approach is rather different from published approaches in terms of the population enrolled, the number of devices evaluated simultaneously in one individual, the use of home and lab assessments, and the analysis intervals used for performance evaluation. In particular, our inclusion of a heterogeneous population for this age range is in contrast to the stringent criteria, in relation to health conditions and medication, applied for most clinical trials. Our high level of data completeness (95-98%) and participant retention (97%) is an indication of the success of our approach. We evaluated device performance (Sections 4.1.6-4.1.8) over an extended period in bed to ensure that we included both sleep and quiet wakefulness. This ensured that we can determine how well a device performs in people with disturbed sleep and poor sleep efficiency, which is a more relevant use-case for many health conditions. We have previously reported some of our findings [40]. The majority of previous evaluation studies have taken the approach of either validating a single device against gold standard, or assessing multiple devices (but not simultaneously) in the same individual. For example, Chinoy and colleagues took a similar approach to us in simultaneously comparing seven consumer sleep tracking devices (wearable and contactless) in young participants at home and in the laboratory, but participants only used a subset of the devices [37]. A strength of our design is that participants utilised multiple devices simultaneously to maximise evaluations and comparisons. The predominant inclusion of young and healthy individuals in published device evaluation studies limits the applicability of the findings [37,[46][47][48]. In addition, previous studies only evaluated the performance of a device over a lights-off period selected by the participants, meaning sleep efficiency is high and so it is not possible to evaluate the ability of the device to discriminate quiet wake (e.g., [46,47]). Indeed, in a young healthy population, it was demonstrated that the performance of both wearable and contactless devices worsened on a night of sleep disruption (when sleep efficiency is poor) compared to an undisturbed night in the laboratory [37]. The value of assessing a device in the population in which it will be used was highlighted by three recent studies. An assessment of the EMFIT-QS mattress sensor in participants from a sleep disorders centre with a BMI of 33.8 ± 8.3 kg/m² (mean ± SD; range 21.4-46.6)
revealed that this device overestimated TST and underestimated WASO [49]. However, the authors noted that the performance actually worsened in those with a high apnoea-hypopnoea index (AHI) and more fragmented sleep, but that estimations of TST actually improved in participants with increased weight and BMI, suggesting that the device performs better with the bigger movements of heavier individuals [47]. The Withings Sleep Mattress Analyser (WSA) was assessed in participants with suspected sleep apnoea, and the device similarly overestimated sleep and underestimated wake, but could accurately detect moderate to severe sleep apnoea (AHI was 31.2 ± 25 and 32.8 ± 29.9 with PSG and WSA, respectively), highlighting its diagnostic and monitoring value [50]. Finally, 293 cognitively normal and mildly impaired older adults were monitored for up to six nights at home using single-channel EEG (scEEG), actigraphy, and sleep diaries [51]. Estimates of TST showed the greatest agreement across all methods, particularly for cognitively intact adults, but the agreement between actigraphy and scEEG decreased in those with mild cognitive impairment and biomarker evidence of Alzheimer's Disease. These studies and our approach emphasize that it is crucial that a device is evaluated in a relevant population for its intended use. The devices we are testing do have potential for long-term use in the home environment. The Withings Sleep Analyser has been deployed in PLWD, where variation in night-time behaviour and physiology was shown to relate to disease progression, comorbid illnesses, and changes in medication [52]. The approach we describe here could be applied to evaluate the performance and acceptability of any novel sleep or circadian monitoring device in any population. Combining information from multiple devices can assist with interpreting sleep-wake behaviour as well as allowing their performance to be cross-validated. For example, the AWS provides information about activity levels but not about whether the participant is in bed; however, when the AWS data are viewed in conjunction with the Withings Mattress data, it is possible to identify when the participant has left the bed rather than just being restless in bed. This is particularly relevant to PLWD, whose nocturnal wandering is a major reason for them to be moved from their home into a care home. One challenge of the protocol was that many participants had not used smart technology previously, and we required participants to complete a number of procedures independently, which caused some PLWD to express concern about whether they would remember. Thorough training sessions, provision of comprehensive written instructions, the role of the study partner for PLWD, and frequent contact between researchers and participants ensured that they felt supported and able to carry out all study procedures. We note that the PLWD included in our study were experiencing mild Alzheimer's disease. The performance and acceptability of devices in more advanced stages of Alzheimer's remains to be addressed. Contactless monitoring devices with very low or 'nil' user burden, such as under-the-mattress devices, are more likely to be useful in these populations.
In conclusion, our protocol allows multiple sleep/circadian/environmental technologies to be assessed simultaneously in an individual, both at home and in the laboratory. Our approach was successful in terms of data quality, data completeness, and gaining an understanding of device acceptability. The protocol was conducted in both cognitively intact older adults and PLWD to provide a comprehensive picture of an individual's behaviour, physiology, and environment. Study Conduct The protocols were guided by the principles of Good Clinical Practice. All participants were compensated for their time and inconvenience. Within the participant information sheet, it was clearly stated that all personal data were handled in accordance with the general data protection regulations (GDPR) and the UK Data Protection Act 2018. In addition, it was explained that anonymised non-personal data may be transferred to the manufacturers of the devices being tested if they need to process the data. The manufacturer can only access anonymised data that details the serial number of the device, the date of recording, and the signals recorded on that specific date. The participants consented to the manufacturers using their anonymised data in the continual assessment and improvement of the performance of the device. Participants For our protocol in PLWD and their caregivers, the inclusion/exclusion criteria for PLWD included the following: age range of 50-85 years, a confirmed diagnosis of prodromal or mild Alzheimer's disease, an S-MMSE (Standardised Mini Mental State Examination [53]) score > 23, living in the community, and, if taking medication for dementia, being on a stable dose for at least three months prior to recruitment. Individuals who had an unstable mental state, severe sensory impairment, or active suicidal ideation, or who were being treated for a terminal illness, were excluded. PLWD could participate in the study by themselves, or their carer/family support/friend could also enrol as a 'study partner' participant. These study partners had to be > 18 years of age, have an S-MMSE score > 27, and must have known the PLWD for at least six months and be able to support them in their participation. Study partners completed the same procedures as the PLWD. Cognitively intact older adults were recruited via our Clinical Research Facility database, where potential participants have registered and consented to be contacted about ongoing research. PLWD and their study partners are recruited in collaboration with local NHS trusts via memory services. All participants underwent an initial telephone health screening and a subsequent in-person screening visit to determine their eligibility to take part in the study. At the screening visit for cognitively intact older adults, following informed consent, participants completed a range of assessments including measurement of height, weight, and vital signs (body temperature, heart rate, respiration rate, and blood pressure), self-reported medical history, and completion of baseline questionnaires: Epworth Sleepiness Scale (ESS) [54] (>10 indicates excessive daytime sleepiness), Pittsburgh Sleep Quality Index (PSQI) [55] (>5 indicates a sleep disorder), Activities of Daily Living Questionnaire (ADL) [56], and International Consultation on Incontinence Questionnaire-Urinary Incontinence (ICIQ-UI) [57].
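The headline eligibility rules described above lend themselves to a simple, explicit encoding. The sketch below is a hypothetical illustration of how such criteria could be captured in code to support screening; it is not a screening tool used in the study, the field names are invented, and the full protocol contains additional criteria not represented here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    age: int
    smmse: int
    confirmed_prodromal_or_mild_ad: bool
    lives_in_community: bool
    months_on_stable_dementia_medication: Optional[int]  # None if not medicated
    has_exclusion_flag: bool  # unstable mental state, severe sensory impairment,
                              # active suicidal ideation, or terminal illness

def plwd_eligible(c: Candidate) -> bool:
    """Headline PLWD eligibility rules as described in the text (simplified)."""
    if c.has_exclusion_flag:
        return False
    if not (50 <= c.age <= 85 and c.smmse > 23):
        return False
    if not (c.confirmed_prodromal_or_mild_ad and c.lives_in_community):
        return False
    if (c.months_on_stable_dementia_medication is not None
            and c.months_on_stable_dementia_medication < 3):
        return False
    return True

def study_partner_eligible(age: int, smmse: int, months_known_plwd: int) -> bool:
    """Headline study-partner eligibility rules as described in the text."""
    return age > 18 and smmse > 27 and months_known_plwd >= 6

print(plwd_eligible(Candidate(72, 25, True, True, 12, False)))  # True
print(study_partner_eligible(68, 29, 120))                      # True
```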
At the screening visit for PLWD (and study partners, where applicable), and following informed consent, the participants completed standard assessment tools which are frequently used in this population (i.e., S-MMSE [56], Hospital Anxiety and Depression Scale (HADS) [58], quality of life in Alzheimer's Disease (QoL-AD) [59], as well as the PSQI [55] and a medical history questionnaire). In addition, vital signs were recorded, as well as height, weight, and BMI. PLWD and their study partners also completed additional questionnaires either at this visit or during their overnight sessions: ESS, ADL, ICIQ, the National Adult Reading Test (NART) [60], the Berlin questionnaire to assess for sleep apnoea [61], and the Horne-Ostberg questionnaire [62] to assess time of day preference, and their education level was documented. For all participants, their general practitioner was informed of their participation. Longitudinal Monitoring At-Home Participants were provided with a range of technology to use in their home to monitor their sleep/wake patterns and environmental light exposure (see Table 1 for the list of devices and variables measured). The technology was either installed by the participants themselves or researchers went to the participants' homes to assist them. The wrist-worn/collarbone-worn devices were worn continually, and participants were requested to complete a log whenever they removed them to record the times and reason for removal. The EEG wearables were only used for one or two nights and the nearables were left in situ throughout. Participants were requested to complete a modified version of the Consensus Sleep Diary-M [63] (electronically or on paper) on a daily basis to record subjective information about their sleep patterns, sleep quality, and daytime napping, as well as alcohol and caffeine consumption. In addition to the standard questions, participants were asked to provide further details about their daytime naps (what time, duration, where and why they napped) and nocturnal awakenings (for each awakening, what time they awoke, how long it took to fall asleep, if they left the bed, and if so, at what time). Participants were requested to complete cognitive assessments one to two hours after waking each day on an electronic tablet. Video monitoring used a Somnomedics video camera (SOMNOmedics GmbH, Randersacker, Germany). Overnight Laboratory Session The session was ~24 h in duration and participants were required to arrive in the afternoon and remain at the Research Centre, which hosts the UKDRI clinical research facility at Surrey, until the following day. Upon arrival, participants' vital signs were measured and continued eligibility assessed. The devices that were used at home were downloaded, reset, and returned to the participants, together with any additional devices only used in the laboratory. During their stay, participants' gait and postural stability were assessed using video and radar technology. During the laboratory session, participants had an indwelling cannula sited for collection of regular blood samples at three-hourly intervals (including overnight) for 24 h to assess time of day variation in biomarkers. The samples were processed and analysed for levels of melatonin, as a gold-standard marker of the circadian clock, as well as biomarkers of dementia, e.g., neurofilament light (NfL), phosphorylated tau (p-tau), and amyloid-beta (Aβ40 and Aβ42; e.g., [65][66][67]). In addition, participants collected urine for 24 h in four-hourly intervals (eight hours overnight) for measurement of aMT6s.
Following dinner, participants were equipped with all the electrodes and sensors required for a clinical video-polysomnographic (PSG) recording using AASM-compliant equipment and montage. The PSG equipment was the Somnomedics SomnoHD system with Domino software (v 3.0.0.6, sampled at 256 Hz; SOMNOmedics GmbH, Randersacker, Germany), and we used an American Academy of Sleep Medicine (AASM) standard adult montage. Participants could also have a wearable EEG device fitted for concurrent EEG recording, and contactless sensors were positioned for overnight recordings. Prior to the start of the PSG recording, participants were asked to lie on the bed in different poses (prone, supine, right, left, seated) and recordings were made with video and radar technology to assess the ability of the radar technology to assess physiology in different poses. The protocol takes advantage of the 'first night effect' and an extended period in bed to create a model for mildly disturbed sleep [68]. Participants were required to be in bed for a 10 h recording period that was determined on the basis of their habitual time-in-bed period (HTiBP). For example, for HTiBP < 8 h, the recording period started one hour earlier than habitual bedtime; for HTiBP > 10 h, the recording started at habitual bedtime. For those with 8 h ≤ HTiBP ≤ 10 h, the recording start time was determined as habitual bedtime − [0.5 × (10 − HTiBP)]. This extended period in bed was used to ensure that the recordings included periods of quiet, recumbent wake to determine if the devices could distinguish quiet wake from sleep. Participants selected their own lights off/on times and around these times were permitted to conduct quiet, sedentary activities, e.g., watching movies, reading. Overnight recordings were performed in individual, environmentally controlled bedrooms or in our bespoke bedroom facility that has double occupancy or adjacent room access for PLWD and their study partners. Upon awakening, participants were requested to complete their sleep diary and cognitive test battery as well as a questionnaire about device acceptability. Prior to discharge, vital signs were taken and, for cognitively intact older adults, the S-MMSE was administered. Device and Data Management The flow of data acquisition and data management is shown in Figure 9. Device Allocation Eligible participants were enrolled into the study and had a set of devices allocated to them. All devices and systems had unique identifiers, with a one-to-one allocation as follows: (a) device code to participant for wearable/contactless devices used at home or in the lab, and (b) device to location to participant for devices installed in the laboratory (e.g., floor sensor data is recorded for a specific lab room, and the room is allocated to the participant). All devices were mapped to an operation schedule, which means that data were collected during each 24 h period from 12 noon to 12 noon the following day, unless the device had continuous recording.
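As a worked illustration of the habitual time-in-bed rule used to set the 10 h laboratory recording window (described in the Overnight Laboratory Session above), the following sketch simply encodes the three stated cases. It is an illustration of the rule as written, not code taken from the study.

```python
def recording_start_offset_h(htibp_h: float) -> float:
    """Offset of the 10 h laboratory recording start relative to habitual
    bedtime (negative = start before habitual bedtime), following the rule
    described above.
    """
    if htibp_h < 8:
        return -1.0                   # start one hour before habitual bedtime
    if htibp_h > 10:
        return 0.0                    # start at habitual bedtime
    return -0.5 * (10 - htibp_h)      # 8 h <= HTiBP <= 10 h

# Worked examples: habitual time in bed of 7, 9 and 11 hours.
for htibp in (7, 9, 11):
    print(htibp, "h in bed ->", recording_start_offset_h(htibp),
          "h relative to habitual bedtime")
```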
Device and Data Management

The flow of data acquisition and data management is shown in Figure 9.

Device Allocation

Eligible participants were enrolled into the study and had a set of devices allocated to them. All devices and systems had unique identifiers, with a one-to-one allocation as follows: (a) device code to participant for wearable/contactless devices used at home or in the lab, and (b) device to location to participant for devices installed in the laboratory (e.g., floor sensor data is recorded for a specific lab room, and the room is allocated to the participant). All devices were mapped to an operation schedule, which means that data were collected during each 24 h period from 12 noon to 12 noon the following day, unless the device had continuous recording.

Device Set-Up and Synchronisation

To be able to directly compare the performance of different devices, it is essential that they are time synchronised. All network device clocks were synchronised to a Network Time Protocol (NTP) server. Commercial standalone systems were synchronised through the respective software applications used to set up the device recordings.

Data Acquisition

The devices used were either battery-powered logging devices or were directly connected to power, and either stored data locally or were Wi-Fi enabled and transmitted data to the secure cloud servers of the manufacturers. Participants were provided with an independent Wi-Fi 4G gateway for device connection.

Upon arrival at the laboratory, all devices were collected from participants for the following: (a) download of data and confirmation of power levels for battery-powered devices, and (b) reconfiguration of connected devices to connect to local Wi-Fi connections. Devices to be used during the laboratory session were returned to the participants.
At the end of the lab session, all logging devices were connected to the relevant secure system and the source data files were extracted and moved to a location based on the participant, day(s), and device for those data. For the online server-based systems, these were synchronised, and data were then extracted and placed into the relevant source data file system.

Data Mapping and File Name Convention

All data recorded as a source file, or in a source system, were mapped to a named file and location based on study-specific parameters, e.g., Study Name (required), Device code (required), Test/Data/Measure (optional), Participant or group (required), Visit (required), and Study day/night (required). Access to this Research Data Store (RDS) was strictly controlled, in accordance with Information Governance procedures, for the specific use of the study protocol owners for analysis.

Data Processing and Analysis

To evaluate the accuracy and reliability of the focus technology (the devices being evaluated) in measuring sleep, we compared it to a standard reference technology. For the at-home recordings, the comparative standard measure against which all other technologies were evaluated was the combination of the Actiwatch Spectrum (AWS) and the Consensus Sleep Diary (cognitively intact adults), or the AX3 and the Consensus Sleep Diary (PLWD and study partners). For the in-laboratory session, the PSG is considered the gold-standard measure [21,39,69] for all participants. The PSG recordings were scored in 30 s epochs (in accordance with AASM guidelines) by two independent scorers, and a consensus hypnogram was generated. AHI was determined using the AASM criteria for scoring apnoeas/hypopnoeas, where there is a >3% drop in oxygen saturation and/or an arousal.

The focus technology was evaluated against the standard reference technology (AWS + sleep diary or PSG) for its ability to estimate sleep summary measures (e.g., TST, SOL). In addition, in the laboratory, the focus technology was assessed for its epoch-by-epoch (EBE) concordance with the PSG hypnogram. Sleep summary measures were obtained by processing data using local individual proprietary software for battery logging devices, or via raw signals being uploaded to a cloud-based server for scoring by a proprietary machine-learning-based algorithm. Our approach to data analysis is depicted in Figure 10.
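As a concrete illustration of the file-naming convention described above, the following is a minimal sketch; the separator, file extension, ordering of parameters, and directory layout are assumptions, since the study's actual RDS layout is not specified here.

```python
from pathlib import Path

def source_file_path(root: Path, study: str, device_code: str, participant: str,
                     visit: str, study_day: str, measure: str | None = None) -> Path:
    """Build a file path from the study-specific naming parameters:
    Study Name, Device code, optional Test/Data/Measure, Participant, Visit, Study day/night."""
    parts = [study, device_code]
    if measure:  # Test/Data/Measure is optional
        parts.append(measure)
    parts += [participant, visit, study_day]
    return root / participant / ("_".join(parts) + ".csv")

# Example (all names are illustrative):
print(source_file_path(Path("/rds"), "SleepStudy", "AWS-017", "P001", "V1", "night01", "actigraphy"))
```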
Sleep Summary Measures

The sleep summary measures can be grouped into two categories: (1) sleep/wake measures, e.g., TST, SOL, WASO, and SE, and (2) sleep stage duration measures. The sleep stage duration measures vary depending on the level at which the focus device classifies sleep/wake, i.e., binary, four stages, or full AASM. The interval over which the sleep summary measures were calculated (the analysis period) was either automatically set by the device algorithm, or could be manually set using either the sleep-diary-reported times of attempted sleep and final awakening (standard for at-home recordings) or the lights-off period or total recording period (standard for in-laboratory recordings) [21,39]. The analysis period chosen can have a substantial impact on the summary measures calculated and on the performance of the device when compared to home/laboratory standards [40,41].

All of the focus technologies generated summary measures automatically using the device-algorithm-determined analysis period. The primary analysis was performed using these automatic summary measure estimates. However, for completeness, the summary measures could also be calculated manually.

A number of data visualisations (scatter plots, box plots, QQ plots, etc.) were performed to check the distribution of the data and for the presence of outliers. Further statistical tests were performed to check the normality of the data. For the agreement estimation, Bland-Altman analysis was performed, and bias, limits of agreement, and minimum detectable change were estimated (a minimal sketch is given below). Other metrics that could be computed included, for example, Pearson's correlation, consistency intraclass correlation (ICC), effect size (Cohen's d), and mean absolute percentage error (MAPE) [70,71]. To rank the devices, agreement matrices containing the sleep measure accuracy metrics were created.

Epoch-By-Epoch (EBE) Concordance

At-home recordings: the resolution of the focus technology's hypnogram was reduced to a binary sleep/wake classification to match the AWS, which was used as the comparative standard measure. The analysis window was set between 18:00 h and 12:00 h, and all common periods of the sleep/wake time series were evaluated.
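Relating to the agreement estimation described above, the following is a minimal sketch of Bland-Altman bias and limits of agreement, plus MAPE, for paired per-night summary estimates (e.g., TST in minutes) from a focus device and the reference; it is illustrative only, with made-up example values, and is not the study's analysis code.

```python
import numpy as np

def bland_altman(device: np.ndarray, reference: np.ndarray):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diff = device - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)            # 95% limits of agreement
    mape = float(np.mean(np.abs(diff / reference)) * 100)  # mean absolute percentage error
    return bias, loa, mape

# Example: TST (minutes) from a focus device vs the reference on five nights.
device_tst = np.array([402.0, 371.0, 435.0, 398.0, 410.0])
reference_tst = np.array([395.0, 380.0, 428.0, 405.0, 400.0])
print(bland_altman(device_tst, reference_tst))
```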
In-laboratory recordings: the PSG hypnogram resolution was reduced to match the levels of sleep stage output by the device (e.g., N1 + N2 = light sleep (LS) and N3 = deep sleep (DS)) to allow direct comparison. Only valid pairs of epochs between the PSG and the device were used for the concordance analysis. The analysis window was set as the total recording period (~10 h). The EBE concordance metrics for the devices were estimated from the confusion matrices constructed. The concordance metrics used for the analysis of all the different sleep stage levels included sensitivity, specificity, accuracy, Matthews correlation coefficient (MCC), and F1 score. Similar to the sleep summary measures analysis, an agreement matrix for EBE concordance was created using the MCC. The MCC is preferred to other concordance metrics since it accounts for the class imbalance commonly encountered in hypnogram data and is a better alternative to metrics such as kappa or its variants [72]. The final device ranking was created using the summary sleep measures and the EBE agreement matrix. Furthermore, the effects of participant characteristics such as age, sex, BMI, AHI, and other confounding factors on device accuracy and reliability were also explored.

Environmental Measures

Characterising the environment, in particular light, is crucial to understanding sleep/circadian physiology in the real world and in different disease states [73,74]. In the current studies, light exposure patterns were assessed by both static and worn devices which recorded lux values at one-minute intervals. One wearable (which measures white, red, green, and blue light) was worn on the wrist and one was clipped onto clothing near the collarbone; the static device was placed in the room of the home where the participant spent the majority of their time. During data visualisation, imputation was performed for any periods during which the participants were awake but the measured light levels were zero lux, which could be due to the sensors being accidentally covered. The imputation consisted of replacing these zero values with the median value from the preceding and succeeding 30 min. This was calculated across all available days of data for each participant separately. The consistency of measurements between devices was assessed by performing correlations between the lux values obtained by the wrist-worn and collarbone-worn devices.

Quality Assurance and Mitigating Issues

Troubleshooting

To maximise data completeness and quality, data acquisition from Wi-Fi-enabled devices was monitored daily during the study, and participants could be contacted if any issues arose. Participants were also able to contact the researchers 24/7 with any issues or concerns. Some potential issues that may arise, and their mitigations, are presented in Table 2.

Table 2. Device evaluation studies: potential issues and mitigations.

Potential issue: Device synchronisation. Devices may not be time synchronised if they were set up/downloaded/analysed on different systems. This could be because some devices use timestamps from local machines whereas others use UTC. This has previously been identified as being critical for epoch-by-epoch analysis [36].
Mitigation: Possible solutions include using a physiological signal, e.g., eye blinks or moving the wrist, as a synchronising signal for cross-correlation.
Potential issue: Missing data. This could occur due to equipment malfunction, data signal loss, insufficient data storage, or user error (e.g., wearing the device incorrectly, not using the device when required, forgetting to update apps, forgetting to enter data, unplugging or obstructing nearable devices).
Mitigation: (a) ensure participants are thoroughly trained in the use of all equipment and provide instructions to take home, (b) where possible, remotely monitor data acquisition and follow up if needed, (c) test all equipment before use.

Expertise Needed to Implement the Protocol

These studies require a team of trained personnel to ensure participant safety and wellbeing as well as data quality and integrity, including troubleshooting issues with devices. This level of support is required from the point of consent, throughout the at-home data collection, and for the overnight laboratory session. In addition, specialised and competent staff are required for collecting blood samples, PSG instrumentation, PSG recordings, PSG scoring, and data analysis.

Figure 1. Overview of the concept of the protocol and categories of the devices included, the data and application domain, and the study design.

Figure 2. Schematic diagram of the protocol. The people symbol indicates when the telephone prescreening assessment was conducted. The clipboard symbol indicates completion of questionnaires, the heart symbol when vital signs were measured, and the battery symbol indicates when participants were trained to use the technology. Grey bars indicate sleep periods and white bars indicate wake periods. The black horizontal line indicates the use of wearables and nearables throughout the at-home period. The pink horizontal lines indicate when an EEG device was used at home to measure sleep physiology. The symbol of a hand using an electronic tablet indicates completion of the cognitive test battery, and the questionnaire symbol indicates completion of the sleep diary.
Figure 3. Multiple days of at-home recording (days −14 to −1) and a single overnight laboratory session (day 0) in a male participant in his 60s. The grey bars represent when the participant is out of bed and the purple and green bars represent when the participant is in bed as detected by two different 'under the mattress' nearable devices (Nearable 1 = Withings Sleep Analyser, WSA, and Nearable 2 = EMFIT-QS, respectively). For Nearable 1, the light pink represents periods of wake and the darker purple represents sleep; for Nearable 2, the light green represents periods of wake and the darker green represents sleep. The bed entry and bed exit times recorded on the sleep diary are represented by inverted grey and pink triangles, respectively, and the black triangle indicates estimated sleep onset according to the sleep diary. The horizontal magenta lines represent nap times recorded on the sleep diary. The blue bars represent wrist-worn actigraphy, with dark blue indicating sleep and light blue representing wake.

Figure 4. Hypnograms from a 10-h in-bed period in a laboratory environment in a single participant with simultaneous polysomnography, including video, and three nearable devices (Nearable 1 = Withings Sleep Analyser, Nearable 2 = EMFIT-QS, Nearable 3 = Somnofy). The black vertical dotted line depicts Lights Off and the red vertical dotted line depicts Lights On. The orange horizontal bar represents bed occupancy according to the video, with darker lines indicating when the participant left the bed. For each nearable, the discrepancy between the nearable and the PSG-determined sleep is depicted at three different levels of sleep stage classification: (1) wake vs. sleep, (2) wake vs. NREM vs. REM, (3) wake vs. deep sleep vs. light sleep vs. REM. The darker coloured regions indicate epochs of discrepancy. BO = bed occupancy, S = sleep, W = wake, NR = NREM, NP = not present, REM = rapid eye movement sleep, LS = light sleep, DS = deep sleep.
Figure 5. Physiological measures during a 10 h in-bed period in a laboratory environment in a single participant with simultaneous recordings of polysomnography, including video, and from three nearable devices. The black vertical dotted line depicts Lights Off and the red vertical dotted line depicts Lights On. The orange horizontal bar represents bed occupancy according to the video, with darker lines indicating when the participant left the bed. For each nearable device, blue lines represent breathing rate and red lines represent heart rate. Within the PSG hypnogram, BO = bed occupancy, A = artefact, W = wake, R = REM sleep, N1 = stage 1 NREM sleep, N2 = stage 2 NREM sleep, N3 = stage 3 NREM sleep. Devices: Nearable 1 = Withings Sleep Analyser, Nearable 2 = EMFIT QS, Nearable 3 = Somnofy.

Figure 6. Multiple days of at-home recording (days −14 to −1) and a single overnight laboratory session (day 0) from a PLWD and their partner who share a bed. The white bars represent when the participant is out of bed and the purple and red bars represent when the participant is in bed as detected by two different nearable devices (Nearable 1 = Withings Sleep Analyser (WSA) and Nearable 2 = Somnofy, respectively). The purple bars represent an under-mattress sensor and the red bars a bedside sensor; for both, the darker shading indicates sleep and the lighter shading indicates wake as determined by the devices. The bed entry and bed exit times recorded on the sleep diary are represented by inverted grey and pink triangles, respectively, and the black triangle indicates estimated sleep onset according to the sleep diary.
Figure 8. Light exposure data (plotted on a log scale) measured over a 24 h period for one participant using three devices: a wrist-worn wearable (blue), a collarbone/lapel-worn wearable (green), and a static room device (black) placed where the participant spends the majority of their time. Grey shaded areas indicate sleep periods reported in the sleep diary, the vertical red dashed line is sunset, and the vertical pink dashed line is sunrise. The scatter plots indicate the relationship between the light measures obtained by the different devices.

Figure 9. Overview of data acquisition and data management. The two sides of the dotted line on the diagram represent, from left to right, the participant, environment, and paper documentation during the time of the study, and the collection of digital representations of these, respectively.

Figure 10. Overview of the process for evaluating consumer-grade technology against a standard device.

Table 1. Commercially available and research-grade devices used in the study protocols.
2024-03-03T17:07:05.383Z
2024-02-29T00:00:00.000
{ "year": 2024, "sha1": "86eed4061dff1b52eb844521b0300cd429faaf4b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2624-5175/6/1/10/pdf?version=1709203620", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f972ec131da5788b1e369171801ff680e4151b46", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
118324183
pes2o/s2orc
v3-fos-license
CP Violation in \tau ->\nu\pi K_S and D->\pi K_S: The Importance of K_S-K_L Interference

Abstract: The $B$-factories have measured CP asymmetries in the $\tau\to\pi K_S\nu$ and $D\to K_S\pi$ modes. The $K_S$ state is identified by its decay to two pions at a time that is close to the $K_S$ lifetime. Within the Standard Model and many of its extensions, the asymmetries in these modes come from CP violation in $K^0-\bar{K}^0$ mixing. We emphasize that the interference between the amplitudes of intermediate $K_S$ and $K_L$ is as important as the pure $K_S$ amplitude. Consequently, the measured asymmetries depend on the times over which the relevant decay rates are integrated and on features of the experiment.

Introduction. The BaBar collaboration has recently announced a measurement of the CP asymmetry in the τ → πK_S ν_τ decay, Eq. (1) [1] (see [2] for related measurements). The BaBar [3,4], BELLE [5], CLEO [6,7] and FOCUS [8] collaborations have measured the CP asymmetry in the D → πK_S decay, Eq. (2), where the numerical value is an average over the four measurements. Assuming that direct CP violation in the τ or D decay plays a negligible role, as is the case in the Standard Model and many of its extensions, the asymmetries (1) and (2) arise from CP violation in K⁰-K̄⁰ mixing [9][10][11]. It is important then to realize two facts:

1. The τ⁺ (τ⁻) decay produces initially a K⁰ (K̄⁰) state, while the D⁺ (D⁻) decay produces initially a K̄⁰ (K⁰) state. (The color- and doubly Cabibbo-suppressed D⁺ → K⁰π⁺ decay amplitude can be safely neglected.)

2. The intermediate K_S state is not directly observed in the experiments. It is defined via a final π⁺π⁻ state with m_ππ ≈ m_K and a time difference between the τ or D decay and the K decay t ≈ τ_S, where τ_S is the K_S lifetime.

Thus, in the absence of direct CP violation, the asymmetries depend on the integrated decay times. The fact that the two asymmetries, which are predicted to have opposite signs, carry the same sign in the experimental measurements (1) and (2) is intriguing. The naive expectation that A_τ = −A_D is excluded at 3.3σ.

In this work, we derive an explicit expression for the A_ǫ(t_1, t_2) asymmetry and its dependence on the experimentally known mixing parameters ǫ and ∆m. In doing so, we correct sign mistakes made in previous literature. We argue that the theoretical prediction depends on t_1, t_2 and on details of the experiment. Until these subtleties are taken into consideration, it is difficult to assess the significance of the deviation from A_τ = −A_D.

The experimental parameters. The two neutral K-meson mass eigenstates, |K_S⟩ of mass m_S and width Γ_S and |K_L⟩ of mass m_L and width Γ_L, are linear combinations of the interaction eigenstates |K⁰⟩ (with quark content s̄d) and |K̄⁰⟩ (with quark content sd̄). The average mass and width, and the differences ∆m = m_L − m_S and ∆Γ = Γ_L − Γ_S, are defined in the usual way. The decay amplitudes into a final ππ state and the relevant CP-violating parameters are defined in the standard way, where in the first approximation we neglect a correction of relative order |ǫ|², and in the second a correction of relative order ǫ′/ǫ. We obtain the time-dependent rates for an initial K⁰ and an initial K̄⁰ to decay into ππ; their difference D_ππ(t) and sum S_ππ(t) are given in Eqs. (9) and (10), respectively.

For the sum S_ππ(t) of Eq. (10), the interference (and the pure K_L) terms are suppressed by O(ǫ²) compared to the pure K_S term. For the difference D_ππ(t) of Eq. (9), however, this is not the case. The ratio R between the second (interference) and first (non-interference) terms in D_ππ(t) is plotted in Fig. 1 as a function of time. In the figure we can observe the following features:
1. Even at very early times, the interference term is not negligible compared to the pure K_S term. For example, at t = 0, R = −1.

3. For times early enough that the pure K_L term can be neglected (t ≪ 12τ_S), R reaches a minimum at t/τ_S ∼ π, R ∼ −e^{π/2}, and a maximum at t/τ_S ∼ 3π, R ∼ +e^{3π/2}.

Since the CP asymmetry depends on the time at which the kaon decays, the final measurement is sensitive to the experimental cuts. To incorporate these cuts, we need to take into account not only the efficiency as a function of the kaon decay time, but also the kaon energy in the lab frame, to account for time dilation. We parametrize all of these experiment-dependent effects by a function F(t), such that t is the time in the kaon rest frame and 0 ≤ F(t) ≤ 1. We emphasize that this function must be determined as part of the experimental analysis. The experimentally measured asymmetry, A_ǫ of Eq. (13), is thus given by the convolution of the bare asymmetry with F. While we do not have the function F(t), it is reasonable to approximate it by a double step function, with F(t) = 1 for t_1 ≤ t ≤ t_2 and F(t) = 0 otherwise. In this case the experimentally measured asymmetry A_ǫ defined in Eq. (13) coincides with the theoretical one, A_ǫ(t_1, t_2), defined above; we can then safely neglect terms of O(ǫ²). Neglecting direct CP violation, we can use the model-independent relation for Im(ǫ) [12] to obtain an explicit expression for A_ǫ(t_1, t_2) in terms of Re(ǫ), ∆m, and Γ_S.

A particularly simple result arises when t_1 ≪ τ_S and τ_S ≪ t_2 ≪ τ_L, so that we can take e^{−Γ_S t_1} = 1, e^{−Γ_S t_2} = 0, and cos(∆m t_1) = 1. In addition we use y ≃ −1, and obtain the asymptotic expression of Eq. (19), where in the last step we used the experimental value.

In Figs. 2 and 3 we investigate the dependence of A_ǫ(t_1, t_2) on the choice of t_1 and t_2. In Fig. 2 we plot A_ǫ(t_1, t_2)/(2Re(ǫ)) as a function of t_2 for t_1 = τ_S/10. In Fig. 3 we plot A_ǫ(t_1, t_2)/(2Re(ǫ)) as a function of t_1 for t_2 = 10τ_S. We emphasize the following points:

1. For t_2 large enough that the e^{−Γ t_2} term is negligible, and for t_1/τ_S ≪ 1, the asymmetry rises linearly with t_1. This linear rise with t_1, which can be clearly seen in Fig. 3, is a result of "losing" a fraction t_1/τ_S of the time-independent pure K_S term in the asymmetry.

2. For t_1 fixed and small, A_ǫ reaches a maximum at around t_2 = (3π/2)τ_S, and then, for higher t_2, converges to its asymptotic value of Eq. (19). These features can be clearly seen in Fig. 2. The maximum is enhanced by a factor of about (1 + √2 exp(−3π/4)) ≈ 1.13 compared to the asymptotic value.

Let us comment on the previous relevant literature. The idea to measure the CP asymmetry of Eq. (1) was first proposed in Ref. [9]. In this beautiful work, the importance of the interference term in restoring the CPT constraint is explained. Indeed, the BaBar paper [1] compares its measurement to the prediction given in Eq. (7) of Ref. [9]. We note, however, that both Eq. (6) and Eq. (7) of Ref. [9] have a sign mistake: both the "pure K_L" term and the "pure K_S" term give |q|² − |p|² ≃ −2Re(ǫ). Yet, when the interference term is taken into account, it approximately reverses the sign of the pure K_S result. Correcting the sign of Eq. (7) in [9] and taking into account the interference term combine to approximately reproduce the numerical prediction quoted in this equation. Further analysis of this asymmetry is given in Ref. [10]. Here the fact that the interference term practically reverses the sign of the "pure K_S" asymmetry is nicely pointed out, yet several sign mistakes lead to a wrong sign in the final prediction, see their Eq. (14). The idea to measure the CP asymmetry of Eq. (2) was first proposed in Ref. [11].
The interference term is not discussed in this work.

Conclusions. CP asymmetries of O(10⁻³) in the τ± → π±K_S ν and D± → π±K_S decays are predicted within the Standard Model as a result of CP violation in K⁰-K̄⁰ mixing. A violation of the SM predictions would imply direct CP violation in τ and/or D decays. The kaon is identified via a final two-pion state with invariant mass m_ππ ∼ m_K and decay time t ∼ τ_S. In the total decay rate, the contribution of the intermediate K_S is strongly dominant. In the CP asymmetry, however, the K_L-K_S interference term is as important as the pure K_S term. As a consequence, the asymmetry depends sensitively on the decay-time interval over which it is measured, and on details of the experiment. The exact SM prediction can be obtained only if the relevant experimental features are taken into consideration. Generically, we expect the measured asymmetry to be opposite in sign and larger in magnitude than the asymmetry that would arise from the pure K_S contribution.

While we focused here on only the two specific examples of D and τ decays, the analysis above applies to any measurement of a CP asymmetry that involves a K_S in the final state. In particular, similar effects should eventually be taken into account in the determination of the angle γ of the unitarity triangle based on B → DK decays, if the kaon is identified as a K_S [13] or if the D decays into a final state with a K_S, such as the Dalitz decay D → π⁺π⁻K_S [14]. Another case where the effect should eventually be included is in the determination of D-D̄ mixing using D decays into a K_S. In this case our formalism cannot be directly applied, because at t = 0 the kaon state is not a pure K⁰ (or K̄⁰), and adjustment to such cases is needed.

The measured asymmetry in D decay seems very consistent with the SM prediction, while the measured asymmetry in τ decay seems to differ from the SM prediction by at least 3σ. In view of the potential implications for new, CP-violating physics, we urge the experimenters to take into account the subtleties that we point out, and to provide not only the measured value of the asymmetry, but also the theoretical prediction, which depends on specific experimental features.
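To give a sense of the scale involved, the following is a small numeric check; it assumes PDG-like values |ǫ| ≈ 2.228×10⁻³ and arg(ǫ) ≈ 43.5°, which are not quoted in the text above, and it only reproduces the order of magnitude 2Re(ǫ) sets for these asymmetries and the ≈1.13 enhancement factor mentioned for the maximum of A_ǫ.

```python
import math

# Assumed inputs (PDG-like values; not taken from the text above).
eps_abs = 2.228e-3               # |epsilon|
phi_eps = math.radians(43.5)     # phase of epsilon

two_re_eps = 2 * eps_abs * math.cos(phi_eps)
print(f"2 Re(eps) ~ {two_re_eps:.2e}")   # ~3.2e-3, the O(10^-3) scale of the asymmetries

# Enhancement of the maximum of A_eps over its asymptotic value, as quoted in the text.
enhancement = 1 + math.sqrt(2) * math.exp(-3 * math.pi / 4)
print(f"enhancement factor ~ {enhancement:.3f}")  # ~1.134
```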
2011-10-17T20:00:06.000Z
2011-10-17T00:00:00.000
{ "year": 2012, "sha1": "1a85e2e35a86c33918b00bdd2b9ee5d1d02b247d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1110.3790", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1a85e2e35a86c33918b00bdd2b9ee5d1d02b247d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
243020970
pes2o/s2orc
v3-fos-license
AN INTEGRATED APPROACH TO THE DIAGNOSIS OF BACTERIAL AND FUNGAL BLOODSTREAM INFECTIONS IN CANCER PATIENTS

Purpose of the study. To evaluate the diagnostic significance of accelerated and affordable verification of a bloodstream infection pathogen using biomarkers: procalcitonin and the Platelia™ Candida Ag Plus mannan antigen.

Patients and methods. 349 cancer patients with febrile fever from 6 medical and diagnostic oncological hospitals in the Southern Federal District of the Russian Federation were examined during 2019. Patients aged from 1 to 85 years were hospitalized in intensive care, pediatric oncology, and hematology oncology departments. Patient informed consent for the study was obtained. The diagnostic algorithm included a blood test using an automatic BacT/ALERT 3D analyzer and a parallel study of biomarker levels by enzyme immunoassay. Identification of strains and determination of sensitivity to antimicrobial agents were performed on a Vitek 2 automatic analyzer (BioMerieux, France). Procalcitonin levels greater than 10 ng/ml were taken to indicate the development of bacterial inflammation. Procalcitonin was determined with Procalcitonin-ELISA-BEST kits (Russia). Mannan antigen was determined using Platelia Candida Ag kits (France); the result was considered positive at an antigen concentration of ≥125 pg/ml. The Candida mannan antigen allowed us to decide on the involvement of Candida spp. in the infectious process.

Results. An integrated approach to the diagnosis of bloodstream infections increased the percentage of detection of pathogens to 58.7%. Testing for bacterial infection with the blood culture method and with procalcitonin determination in blood serum revealed similar diagnostic values. Candida mannan antigen testing significantly improved the early diagnosis of candidal infection despite negative blood cultures, which was probably associated with the prolonged cultivation of Candida spp. in blood (from 2 to 5 days). The inclusion of biomarker testing in the diagnostic algorithm in cases of suspected bloodstream infection allowed early pathogen identification and the start of adequate antibacterial or antifungal therapy.

Conclusion. An integrated approach to the diagnosis of bloodstream infections improved and, just as importantly, significantly accelerated pathogen verification. Bacterial infection cases showed comparable results for hemoculturing and biomarker testing; however, in candidal infection, determination of the Candida mannan antigen appears critical, as it was significantly more sensitive than blood culture and allowed the etiology of fever of unknown origin to be identified in many patients.

INTRODUCTION

Worldwide, cancer is the second leading cause of death, killing about 8.8 million people every year according to statistics from the World Health Organization (www.who.int/mediacentre/factsheets/fs297). Severe bloodstream infections and sepsis complicate antitumor treatment and recovery, and have a significant negative impact on the life expectancy and cost of treatment of cancer patients. Rapid diagnosis of sepsis and initiation of treatment are key factors in reducing mortality in cancer patients [1][2][3][4][5].
Infection-mediated mortality during chemotherapy-induced immunosuppression is an urgent issue that requires studying risk factors and developing strategies to reduce mortality by optimizing diagnostic methods and accompanying therapy [6]. In the etiology of bloodstream infection, both gram-positive and gram-negative bacteria are most common, and yeast-like fungi of the genus Candida also play a significant role. Even with modern equipment, recovering the pathogen from blood remains problematic and takes from one to several days. In this situation, biomarkers offer a clinician an objective and reliable way to respond quickly to the possible development of an infectious complication [7][8]. It was discovered by chance that bacterial infection increases the concentration of procalcitonin in the blood, which contributed to the use of procalcitonin as a marker of bacterial infections. In contrast to other known markers of inflammation, procalcitonin determination is more sensitive and highly specific for severe bacterial infection [9][10][11][12][13][14]. Yeast fungi of the genus Candida are among the most common pathogens of invasive mycoses. Diagnosis of invasive candidiasis is difficult due to its non-specific clinical symptoms and the insufficient sensitivity of the hemocultivation method. One of the available biomarkers that allows the presence of invasive candidiasis to be judged is mannan, one of the Candida spp. antigens, a soluble polysaccharide bound to the walls of yeast cells [15]. Thus, the development of new methods, and the improvement of existing ones, for identifying the pathogen, allowing the etiological factor of bloodstream infection to be established more quickly so that specific treatment can be started as early as possible, is an urgent task of modern medicine.

The purpose of the study: to evaluate the diagnostic significance of accelerated and accessible verification of the bloodstream pathogen using biomarkers: procalcitonin and mannan antigen.

PATIENTS AND METHODS

349 cancer patients with febrile fever from 6 medical and diagnostic hospitals of the Southern Federal District of the Russian Federation were examined during 2019. Patients (men and women) aged from 1 to 85 years were in intensive care units, pediatric oncology, and oncohematology departments. Informed patient consent for research was obtained. The diagnostic algorithm included a blood test using an automatic BacT/ALERT 3D analyzer. Two sets of vials were used for each septic episode. For patients with a body weight of more than 36 kg, each set included a bottle for aerobic and a bottle for anaerobic cultivation, with 10 ml of blood in each bottle; for children with a body weight of up to 36 kg (inclusive), two pediatric vials were used, with 0.5 to 5.0 ml of blood per vial depending on body weight. Each blood culture was accompanied by a parallel study of biomarker levels by enzyme immunoassay. Identification of strains and their sensitivity to antimicrobial agents was performed using an automatic Vitek 2 analyzer (BioMerieux, France). Simultaneously with the culture, a sample was taken into a vacuum tube for enzyme-linked immunoassay (determination of the level of procalcitonin and of the Candida spp. mannan antigen). Procalcitonin values of more than 10 ng/ml were taken to indicate the development of bacterial inflammation. The Candida spp.
mannan antigen, as one of the available biomarkers of invasive candidiasis, allowed us to judge the involvement of Candida spp. fungi in the infectious process. The result was considered positive at an antigen concentration of ≥125 pg/ml. Given the low specificity of the Candida spp. mannan antigen assay, the results were compared with risk factors for the development of invasive candidiasis (perforation or surgery of the gastrointestinal tract, infected pancreonecrosis, central venous catheter, broad-spectrum antibiotics, diabetes mellitus, complete parenteral nutrition, severe patient condition, steroids, immunosuppressors, acute renal failure, and colonization by Candida spp. of more than 2 loci) [16]. The level of procalcitonin was studied using Procalcitonin-ELISA-BEST kits (Russia), and the mannan antigen was determined using Platelia Candida Ag kits (France). Statistical data processing was performed using the statistical package STATISTICA 13.3 (StatSoft Inc., USA). Pearson's chi-square test was used to compare the data.

RESULTS AND DISCUSSION

As a result of the microbiological study, positive blood cultures were obtained in 84 patients (24.1%). Pathogens were distributed as follows: bacteria made up 73.8% (65 strains) and yeast-like fungi of the genus Candida made up 22.6% (19 isolates). Bacterial-Candida associations were detected in 3 (3.6%) cases in particularly severe patients, which significantly worsened their condition. In the parallel study of biomarkers (procalcitonin and the Candida spp. mannan antigen), an increased level of at least one of them was found in 205 (58.7%) patients (Fig. 1). Procalcitonin values of 10 ng/ml or more were observed in 68 (33.2%) patients, which argued in favor of severe bacterial inflammation. A positive Candida spp. mannan antigen result was found in 118 (57.6%) patients. These results suggested Candida infection of the bloodstream in the presence of appropriate clinical signs and risk factors for the development of invasive candidiasis. In 19 (9.2%) patients, both biomarkers were elevated, indicating a possible mixed infection. Comparison of the informative characteristics of the two methods for diagnosing bloodstream infection showed a statistically significant difference (p<0.0001). The results obtained made it possible to optimize treatment tactics and start adequate etiotropic therapy in a timely manner.

In a comparative analysis of biomarker testing and hemocultivation for bacteria, almost comparable values were obtained: positive hemoculture in 65 (73.8%) patients versus procalcitonin levels of 10 ng/ml or more in 68 (33.2%) patients; according to Pearson's chi-square criterion, the difference was significant, p=0.0001. Somewhat different results were obtained for the yeast-like fungi of the genus Candida: Candida was isolated from the blood in only 19 (9.2%) patients, whereas the Candida mannan antigen level was elevated in 118 (57.6%) patients (according to Pearson's chi-square criterion, the difference was significant, p=0.0006). Taking into account the clinical manifestations and risk factors, and despite the negative result of blood culture, which was probably due to the long cultivation time of Candida spp.,
antifungal therapy was prescribed to all patients, with positive dynamics. Positive biomarker results suggested the presence of bacterial-Candida associations in 19 (9.2%) patients, while such associations were obtained in culture in only three (according to Pearson's chi-square criterion, the difference in this case was not significant, p=0.12). All of these patients were in the departments of anesthesiology and intensive care after surgery, and their condition was assessed as extremely serious. In these cases, measurement of the Candida spp. mannan antigen, compared against the risk factors for the development of invasive candidiasis, allowed an invasive candidiasis infection to be suspected and adequate therapy to be prescribed in a timely manner. The data obtained are shown in Figure 2.

In diagnosing a bacterial pathogen of the bloodstream, the culture method showed an advantage, detecting 73.8%, while the biomarker in this study detected 33.2%. The advantage of the culture method was the identification of the pathogen and the determination of antibiotic sensitivity. Nevertheless, for diagnosing the bacterial pathogen, the culture method was comparable with the results of procalcitonin testing (p=0.0001). In diagnosing Candida infection, the best result was obtained with the Candida spp. mannan antigen (57.6%, versus 22.6% for hemoculture), a statistically reliable difference (p=0.0006). This result can be explained by the difficulty of cultivating fungi in hemoculture. As a result of these studies, the pathogen was verified, albeit much later, in 10 more patients. All 10 patients had a positive biomarker level: 4 had an increased level of procalcitonin and 6 had positive values of the Candida spp. mannan antigen. Repeat blood cultures in 6 of these patients also yielded growth (bacteria were isolated from 4 samples and Candida from 2 samples). In 26 patients with persistent febrile fever, the condition was regarded as a manifestation of the cancer itself. 24 patients were diagnosed with fever of unknown origin (no infectious agent was detected in additional studies, and no evidence of progression or recurrence of the cancer was found). Thus, the hemoculture method allowed us to verify the causative agent of bloodstream infection in 24.1% of cancer patients in the initial study, and in 6 more patients with additional repeated blood cultures. Multiple (dynamic) studies of biomarker levels improved this result to 61.6%, with a significant difference of p<0.0001.

CONCLUSION

In bacterial bloodstream infection, the culture method showed an advantage over biomarker testing in identifying the pathogen and allowed antibiotic sensitivity to be determined, but procalcitonin determination reduced the time to obtain a result, which is extremely important for establishing the likely cause of bloodstream infection in cancer patients. The use of the Candida spp. mannan antigen in diagnostics demonstrated a significantly higher sensitivity than hemocultivation, which is probably due to the extended period required for cultivation of Candida in blood.
An integrated approach that includes biomarker testing in the diagnosis of bloodstream infections in cancer patients improved and significantly accelerated pathogen verification, which in turn contributed to timely and adequate antibacterial or antifungal therapy.
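As an illustration of the decision thresholds described in the methods (procalcitonin greater than 10 ng/ml suggesting severe bacterial inflammation; mannan antigen of at least 125 pg/ml considered positive for possible invasive candidiasis), the following is a minimal sketch; it is illustrative only, not a clinical decision tool, and the function and flag names are assumptions.

```python
def flag_biomarkers(pct_ng_ml: float, mannan_pg_ml: float) -> dict:
    """Apply the study's biomarker cut-offs to a single sample."""
    return {
        "possible_severe_bacterial_infection": pct_ng_ml > 10.0,   # procalcitonin threshold
        "mannan_antigen_positive": mannan_pg_ml >= 125.0,          # Candida mannan threshold
    }

# Example: a sample with procalcitonin 14 ng/ml and mannan antigen 90 pg/ml.
print(flag_biomarkers(14.0, 90.0))
```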
2020-12-03T09:05:41.915Z
2020-11-27T00:00:00.000
{ "year": 2020, "sha1": "57484fdf64445361dce7b2c47d2effceee923fea", "oa_license": "CCBY", "oa_url": "https://www.cancersp.com/jour/article/download/84/46", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "d94139e5fd0ab2eec09c707ab86aa6acf844d9c7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
254618501
pes2o/s2orc
v3-fos-license
Karius With a Q: Role for Microbial Cell-Free DNA Next-Generation Sequencing in Diagnosis of Acute Q Fever

Abstract. The diagnosis of Q fever can be challenging and a high index of suspicion is necessary. Within this case series, we highlight the utility of microbial cell-free DNA next-generation sequencing, or the Karius Test, in the timely diagnosis and management of acute Q fever.

Coxiella burnetii, an obligate intracellular gram-negative coccobacillus, is the causative organism of the zoonotic infection "query" fever, or Q fever [1]. The pathogen has several animal reservoirs, including cattle, sheep, goats, and horses, and can concentrate in animal urine, feces, milk, and placenta. Transmission occurs through occupational exposure (veterinarians, abattoir workers, farmers), environmental exposure downstream of contaminated manure or dust, and blood transfusions [1,2]. The incubation period in acute infection is approximately 20 days, and clinical manifestations are variable, ranging from asymptomatic infection to a flu-like illness, pneumonia, hepatitis, meningoencephalitis, and endocarditis [2].

The diagnosis of Q fever is challenging. A high index of suspicion based on history is critical to diagnosis, as the clinical presentation can be nonspecific and the illness course may be self-limited. Furthermore, C burnetii fails to grow on routine cultures, and current diagnostic assays have limited sensitivity during the acute phase of illness [3]. Microbiologic evaluation of Q fever typically includes serum polymerase chain reaction (PCR) in the acute phase, immunofluorescence-based serologic testing from acute and convalescent phases (phase I and phase II immunoglobulin M [IgM] and immunoglobulin G [IgG] antibodies), and C burnetii PCR on tissue specimens when available.

Recently, microbial cell-free DNA (cfDNA) next-generation sequencing (NGS) has been proposed as a novel tool to diagnose Q fever. Kondo and colleagues utilized microbial cfDNA NGS (Karius Test) to facilitate early diagnosis of C burnetii prosthetic pulmonary valve infective endocarditis, with subsequent confirmation by serology and Coxiella PCR following valve explantation [4]. Here we describe a series of cases that highlight the utility of the Karius Test in facilitating timely diagnosis and management of Q fever.

Patient 1

Liver ultrasound was unremarkable, and magnetic resonance cholangiopancreatography (MRCP) revealed a dilated gallbladder without cholecystitis or biliary duct dilatation, as well as evidence of multiple splenic infarcts, confirmed on computed tomography (CT) of the abdomen/pelvis. Transesophageal echocardiography revealed no evidence of valvular abnormalities. An extensive microbiologic workup revealed positive Q fever serologies with negative Coxiella serum PCR, as well as positive Lyme, Bartonella, and Anaplasma serologies (Table 1 and Figure 1). Concurrently, Karius testing returned positive within 1 week for C burnetii (Figure 1). A liver biopsy demonstrated nonnecrotizing granulomatous inflammation including fibrin ring granulomas, with a positive tissue C burnetii PCR, confirming the diagnosis of severe acute Q fever with positive APLA and hepatitis. The patient was started on a planned 12-month course of therapy with doxycycline and hydroxychloroquine (Table 1).

Patient 2

A 45-year-old man with a history of hypertension, alcohol use disorder, obesity, and sarcoidosis presented with a 2-month history of fever, 40-pound weight loss, episodic epigastric abdominal pain, myalgias, and arthralgias.
He had been admitted to a peripheral hospital 10 days prior with a new diagnosis of T2DM and was noted on CT of the abdomen/pelvis to have edematous pancreatitis. He then presented to our institution with recurrent fevers and hypotension. Examination was notable for epigastric tenderness. Risk factors included frequent occupational exposure to abattoirs as a food inspector, as well as tick bites while hiking and fishing in the preceding 3 months. MRCP confirmed acute pancreatitis and noted the presence of a 1.6-cm right renal neoplasm. Pertinent infectious studies included positive Q fever serologies, negative Coxiella serum PCR, and positive Anaplasma and spotted fever group (IgG 1:128) serologies (Table 1 and Figure 1). The relevance of the latter 2 serologies given his exposure history was unclear. Karius testing was performed at admission and detected C burnetii, consistent with the diagnosis of subacute Q fever (Figure 1). Transthoracic echocardiography (TTE) was negative for infective endocarditis. The patient was treated with a 2-week course of doxycycline with complete resolution of symptoms.

Patient 3

(1.6 mg/dL), and elevated serum inflammatory markers including erythrocyte sedimentation rate (76 mm/hour), CRP (336.2 mg/L), and ferritin (3125 µg/L). The patient was admitted for further workup. Exposure history was notable for recent deer hunting in wooded areas. A comprehensive evaluation was remarkable for elevated APLA IgM (>150 MPL) and IgG (137.8 GPL). A positron emission tomography scan was notable for splenomegaly (18 cm) without abnormal uptake. Bone marrow biopsy, peripheral flow cytometry, lymph node biopsy, and TTE were negative. A targeted infectious workup was notable for an initial negative Coxiella serum PCR, with subsequent Karius testing returning positive for C burnetii (Table 1 and Figure 1). Q fever serologies subsequently confirmed the diagnosis, prompting initiation of a 12-month course of doxycycline and hydroxychloroquine for treatment of acute Q fever complicated by an HLH-like syndrome and positive APLA.

Patient 4

Further history indicated residence near a goat farm with direct exposure to its employees, prompting an infectious workup along with Karius testing (Table 1 and Figure 1). Prior to finalizing results, empiric doxycycline was initiated, with improvement in his fevers and subsequent discharge after stabilization. Following dismissal, Karius testing returned positive for C burnetii, confirming a diagnosis of acute Q fever without evidence of seroconversion. Outpatient TTE was negative for valvular involvement, and antiphospholipid antibody evaluation demonstrated elevated APL IgM (>150 MPL) and IgG (122.3 GPL). Repeat Q fever serologies 2 weeks after the initial negative studies confirmed the above diagnosis (Figure 1). He continued doxycycline monotherapy with serologic monitoring in place.

DISCUSSION

The clinical presentation of Q fever is ambiguous and, despite a high degree of suspicion, it remains an elusive diagnosis with the use of conventional microbiologic studies. Coxiella burnetii is not routinely isolated in blood cultures. Further complicating the diagnosis of acute and subacute Q fever is the frequently observed temporal delay between symptom onset and subsequent seropositivity [5]. Despite limited availability, additional testing including C burnetii PCR can aid in diagnosis during this window period, with higher sensitivity noted on tissue specimens as compared to blood.
However, the sensitivity is suboptimal for acute Q fever, highlighting the need for additional diagnostic methods to identify C burnetii in a timely fashion [3]. Multiple clinical applications of Karius testing have been previously demonstrated [6][7][8], but the utility of microbial cfDNA NGS in the diagnostic evaluation of Q fever remains uncertain [4]. We highlight several auxiliary roles for cfDNA NGS in confirming the microbiologic diagnosis of Q fever. First, as noted with patients 1 and 2, serologic cross-reactivity between C burnetii and alternative infectious diagnoses, including Legionella and Bartonella, as well as potential coinfections, has been well described [5]. Karius testing was useful in confirming the presence of C burnetii DNA in the setting of seropositivity and in reconciling serologic cross-reactivity versus true coinfection. While false-negative results can occur with Karius testing, the lack of detection of coinfection with pathogens such as Borrelia burgdorferi and Bartonella henselae, combined with the absence of a compatible clinical constellation of symptoms, avoided further unnecessary workup for additional pathogens. Second, cfDNA NGS, as employed in the relevant clinical context for patients 3 and 4, was effective in diagnosing acute Q fever an average of 3 days earlier than serology, despite negative C burnetii PCR studies during this window period. Third, timely access to noninvasive diagnostic methods like Karius testing may mitigate the need for additional invasive procedures, such as the liver biopsy in patient 1, thereby reducing the risks associated with invasive studies and the length of hospitalization.

Microbial cfDNA NGS is traditionally considered expensive. However, as this assay becomes increasingly available, it could potentially provide a cost-effective, unbiased, "shotgun" approach to diagnosis, particularly when compared to obtaining and interpreting a battery of pathogen-specific assays in patients with nonspecific clinical presentations like fever of unknown origin. Further studies are needed to evaluate the appropriate timing and context in which the Karius test may be most beneficial as an adjunct to standard clinical practice. Limitations to consider include the general availability of this modality and the turnaround time associated with send-out testing to a referral laboratory. While additional studies are needed to evaluate the sensitivity/specificity and cost-effectiveness of Karius testing relative to the current standard of practice for the diagnosis of acute Q fever, we highlight within this report the multiple potential roles for microbial cfDNA NGS testing in supporting the timely diagnosis and treatment of acute Q fever.

Notes

Author contributions. N. R. and R. B. K. contributed equally to the conception, preparation, and review of this manuscript. O. A. S. contributed to the conception, design, preparation, and manuscript review.

Patient consent. Patients included in this study provided research authorization to Mayo Clinic for the confidential clinical use of their information.

Potential conflicts of interest. The authors report no conflicts of interest.
Implementation of a Psychiatric Consultation for Healthcare Workers during First Wave of COVID-19 Outbreak Background: Prevention and management strategies of mental suffering in healthcare workers appeared as important challenges during the COVID-19 pandemic. This article aims to: (1) show how potential psychiatric disorders for healthcare workers (HCW) during the first wave of the COVID-19 outbreak were identified; (2) present an activity report of this consultation; and (3) analyze and learn from this experience for the future. Methods: We performed a retrospective quantitative analysis of socio-demographic and clinical data, in addition to psychiatric scales scores for the main potential psychiatric risks (PDI, PDEQ, PCL-5, HADS, MBI-HSS) and post-hoc qualitative analysis of written interviews. Results: Twenty-five healthcare workers consulted between 19 March 2020 and 12 June 2020. We found 78.57% presented high peritraumatic dissociation and peritraumatic distress, 68.75% had severe anxiety symptoms, and 31.25% had severe depression symptoms. Concerning burnout, we found that 23.53% had a high level of emotional exhaustion. In the qualitative analysis of the written interview, we found a direct link between stress and the COVID-19 pandemic, primarily concerning traumatic stressors, and secondarily with work-related stress. Conclusions: Early detection of traumatic reactions, valorization of individual effort, and limitations on work overload appear like potential key preventive measures to prevent psychiatric complications for healthcare workers in the context of the COVID-19 pandemic. Introduction The outbreak of coronavirus disease-19 (COVID-19) emerged in December 2019 in Wuhan (China), and has so far consisted of three waves which have already given rise in the world to 162,500,000 confirmed cases and has killed 3,369,259, as of May 2021 [1]. To date, in Switzerland, 674,138 cases have been laboratory-confirmed, equating to 7833/100,000 inhabitants, and 10,080 deaths (117.13/100,000 inhabitants) caused by COVID-19 have been recorded [2]. From the pandemic's onset, healthcare workers (HCW) have needed to adapt to this unprecedented situation to avoid hospital saturation, and limit both deaths and severe complications. For HCW, uncertainty about the length of the pandemic, the need to adapt to new care management due to the outbreak, and the lack of knowledge about COVID-19 were the most prominent stress factors. These challenges were identified earlyon as risk factors of psychological suffering for hospital workers [3]. During the initial lockdown period from 16 March to 19 April 2020, HCW were confronted by the virus in the name of the collective good and quickly became, by force of circumstance, the "soldiers on the frontline". In many countries, this image of HCW was solidified by media and societal rituals, manifested by applause given at the end of each day. Within this context, it may have been difficult for HCW to recognize and acknowledge their psychological suffering. The first published studies on psychological suffering of HCW during the COVID-19 pandemic confirmed the existence of this heavy psychological burden, especially for anxiety, depression, and insomnia [4,5]. 
The influence of age and gender on mental symptoms was also highlighted, with gender considered as how the HCW presented at the consultation rather than as a biological attribute, in line with the Sex and Gender Equity in Research (SAGER) guidelines; occupation, specialization, the types of activity performed, and proximity to COVID-19 patients were further influencing factors [4,5]. Moreover, post-traumatic stress disorder (PTSD) risk in HCW was highlighted during previous coronavirus epidemics, as well as during the COVID-19 pandemic [6]. Certain variables were found to be particularly relevant as PTSD risk factors, such as female gender, older age, exposure level, working role, years of work experience, social and work support, job organization, presence of quarantine, marital status, and resilience factors, such as coping styles and social support [6]. Prior to implementing a consultation facility (CovidPsy) for HCW, we attempted to identify potential mental health risks in order to design our consultation methodology and adapt the terms of care that could be offered. Data from the previous respiratory epidemics of severe acute respiratory syndrome (SARS) in 2002-2003 and Middle East respiratory syndrome (MERS) in 2012 were used, as well as information from foreign media outlets, such as those in China where the pandemic began. These data were then adapted to the local context before implementing the CovidPsy consultation. We were aware that the experience and management of the COVID-19 crisis varied greatly from one country to another; at that time, however, there were no published studies on the impact of the COVID-19 pandemic on the mental health of HCW and no available data for Switzerland or a comparable country. We therefore relied on the available data, considered the psychiatric issues previously described in other crisis contexts, and anticipated potential psychiatric issues given the specificities of the COVID-19 pandemic explained above. We distinguished two main categories of stress factors: (a) work-related stress during the COVID-19 outbreak; and (b) direct stress consequences of COVID-19. Faced with an uncontrollable viral outbreak and its treatment, HCW might have felt powerless to help their patients. For example, as the influx of patients increased, urgent improvised decisions had to be made to spare care resources, which at the time were almost entirely dedicated to managing COVID-19, specifically by implementing algorithms to prioritize care. As a result, the clinical activity of HCW was brutally and rapidly transformed, shaking the very basis of their professional identity. For many, time stood still from the start of this first wave of the health crisis. Unpredictability contaminated all aspects of the HCW's daily life, especially when exposed to certain COVID-19 patients who experienced rapid deterioration. In our university hospital, HCW were "requisitioned", meaning that their vacation was suspended for an indefinite period of time, and they were given rare opportunities to decompress. Faced with difficulties anticipating healthcare resource needs during this unprecedented crisis, as well as absenteeism linked to staff contamination by SARS-CoV-2, HCW had to change medical units accordingly, regularly modifying their schedules, thus creating very unstable and intense working conditions. In light of the pandemic, non-COVID-19 clinical activities had to be abruptly suspended, giving the impression of patient abandonment. 
The issue of patient triage arose, on the grounds of efficiency: COVID-19 patients who needed to be given priority in intensive care units were those for whom an improvement in prognosis was considered more likely. Therefore, the management priority was for COVID-19, to the detriment of other healthcare activities and their consequences [7]. In some instances, these situations led to value conflicts and a profound sense of loss in many HCW who no longer felt useful in the exercise of their profession. Finally, during the first wave of the COVID-19 outbreak, concomitant losses of control and sense, as well as workload, increased, which are recognized as important factors of burnout, as described by Maslach [8]. HCW are known to be exposed at higher risk of burnout and their complications [9], and some work factors such as lack of input or control for physicians, excessive workloads, inefficient work processes, clerical burdens, have been identified for physicians [10]. For these reasons, we considered it was important to investigate burnout and treat it in the context of this psychiatric consultation for HCW. In the course of their work, HCW were exposed to potential contamination by SARS-CoV-2 and subsequent increased risk of transmission to loved ones. The contradictory information surrounding the subject contributed to the caregivers' insecurity, as it was objectively impossible to guarantee protection, due to a lack of knowledge on the transmission modes of the virus. Specifically, HCW could find themselves faced with unpredictable, sudden, and numerous deaths, quantities they were not accustomed to, including those in intensive care units, even if they had previous experience with critical situations. In their personal lives, like the rest of the general population, many HCW were also impacted due to the illness and deaths of those closest to them. The confluence for many HCW of significant stress, the prioritization of professional duties above all others, and the restriction of social contacts and leisure time outside the hospital, has contributed to increases in the risk for psychological distress such as anxiety, depression and traumatic experience during this period. Furthermore, holidays for hospital staff were abolished in most countries. In some cases, HCW were beginning to report situations of very painful rejection from their relatives for fear of contamination. Faced with a highly stressful situation, the subject's coping skills may be overwhelmed and give rise to reactive depressive or anxious symptoms. Depending on the level of personal resources available to mobilize coping strategies, and confronted with the same situation, some workers will develop anxious and/or depressive symptoms, whereas others may be able to adapt. Interindividual variability in reactions to a very stressful and unprecedented situation such as the COVID-19 pandemic was expected. This COVID-19 context created situations of potential or actual death of patients, which meet the definition of traumatic events, as defined in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [11]. Subjects who had a history of trauma, either related or not to their professional activity, and/or a psychiatric history of depression, had a greater vulnerability for the risk of acute stress disorder (ASD) or post-traumatic stress disorder (PTSD) [12]. Importantly, PTSD risk factors have been well studied during previous coronavirus epidemics [13]. 
On the basis of past experiences and the identification of risk factors directly due to COVID-19, a risk of PTSD in HCW seemed important and warranted further investigation. PTSD is associated with a poor prognosis and an important risk of comorbidities such as substance and alcohol abuse [14]. Prevention strategies are known to reduce the risk of chronic evolution to PTSD, such as early identification and care for people at risk of PTSD, even if the effectiveness of some interventions, like cognitive-behavioral therapies (CBT), eye movement desensitization reprocessing (EMDR), and pharmacological strategies, requires further study [15,16]. Important emphasis is placed on the potential psychiatric complications of this health crisis. However, in traumatic situations, psychological benefits grouped under the emerging notion of post-traumatic growth can, on the contrary, develop in the aftermath of a traumatic event. Authors of a qualitative review on disaster-exposed organizations identified several protective factors after a disaster: training, experience, and perceived (personal) competence; social support; and effective coping strategies. Post-traumatic growth can provide a greater appreciation of life and relationships, enhancing self-esteem and providing a sense of accomplishment and a better understanding of an individual's work [17]. The exploration of these protective factors appeared important with regard to the risk factors for PTSD in the time of COVID-19. At the Geneva University Hospitals, at the beginning of the first wave of COVID-19, we implemented several strategies in order to prevent and manage early psychological suffering among HCW. Psychologists in COVID-19 units and hypnosis sessions were deployed. The service of liaison psychiatry, the staff health service and the Health Care Directorate received an official mandate on 16 March 2020 from the medical director of the University Hospital of Geneva, and three days later, on 19 March 2020, a psychiatric consultation was offered to the hospital workers. Our paper aims to (1) show how potential psychiatric disorders for HCW during the first wave of the COVID-19 outbreak were identified; (2) present an activity report of this consultation; and (3) analyze and learn from this experience for the future. Participants All HCW (clinical and non-clinical) of our university hospital were able to ask for a consultation at the permanence without an appointment, not only employees who were in charge of COVID-19 patients. In total, 25 HCW consulted, and 52 consultations were provided between 16 March and 12 June 2020. Of the 25 employees, only 18 gave informed consent, which allowed us to retrospectively analyze their personal and clinical data for the study. The mean age was 40.67 years (range 25-58 years), with a majority of women (14; 77.78%). Those who consulted comprised 9 nurses, 2 physicians, 2 medical students who had been requisitioned in COVID-19 wards during the first wave, 4 other clinical HCW, and 1 administrative hospital employee. We found that 72.22% (n = 13) of HCW consulting at the permanence were frontline health care workers, meaning those who interacted directly with COVID-19-positive, or potentially positive, patients; 77.78% were women (n = 14); and 83.33% were married or living in common law (n = 15). 
Interventions Recommendations for setting up support systems for caregivers were quickly disseminated by the World Health Organization at the start of the crisis [3,18], based on previous epidemics, highlighting the need to organize a system for the prevention and management of mental suffering in HCW. Within a few days in our hospital, not only was the CovidPsy psychiatric consultation service established, but support psychologist positions in the COVID-19 wards, hypnosis sessions, and a hotline were also deployed. Material aids were offered, such as parking spaces, accommodation in hotels for people who lived far away, and free meals. After the first step, when we received an official mandate, we made a request for two consultation offices. We implemented a 7-day-a-week, 9 AM to 6 PM consultation service to receive any HCW requesting help, free of charge and without an appointment. We chose a name and conceived a framework for the consultation. The psychiatric consultation team was composed of hospital and private psychiatrist-psychotherapists and clinical specialist nurses in psychiatry, whose usual clinical activity was reduced thanks to the solidarity of other psychiatric services and the Health Care Directorate. The final team for the psychiatric consultation was composed of 9 psychiatrist-psychotherapists and 8 clinical specialist nurses in psychiatry, to ensure the presence of at least one psychiatrist-psychotherapist and one nurse at all times during opening hours. The psychiatric intervention policies (guidelines for the intervention, organization of the permanence, and establishment of the duty schedule) were defined. Our crisis intervention and algorithm models were inspired by disaster psychiatry [19]. They consisted of a preventive model based on the identification of traumatic stressors and of subjects at high risk of psychological suffering [12]. Based on data and knowledge from previous epidemics about how mental health is impacted in hospital workers, we identified the following risks: (1) burnout; (2) trauma disorders (acute stress disorder, vicarious trauma, post-traumatic stress disorder); and (3) anxiety and depression symptoms. We proposed a systematic screening of these risks at the beginning of the consultation using the French version of The Pocket Guide to the DSM-5™ Diagnostic Exam, for which a license was obtained for each survey [20], and completed the evaluation with questions on psychiatric history, family and social circles, and working conditions. Depending on this evaluation, a personalized therapeutic intervention followed, using specific guidelines. Consultations were carried out by a psychiatrist-psychotherapist and a clinical specialist nurse in psychiatry, to encourage complementary interventions and to share the emotional burden in a face-to-face session. If acute stress symptoms were identified, interventions recommended after a traumatic event, such as defusing, psychoeducation on PTSD, and/or eye movement desensitization reprocessing (EMDR) for recent trauma, were provided. If burnout symptoms were identified, we gave feedback about these symptoms to the hospital workers directly and suggested sick leave. When faced with anxiety symptoms and/or acute stress symptoms, we used stress management tools such as safe place, cardiac coherence and mindfulness interventions. 
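The mapping from screening results to the proposed first-line interventions described above can be summarised schematically. The sketch below is purely illustrative of that triage logic and is not a clinical tool; the function name and the boolean flags are hypothetical, and the actual choice of intervention always rested on the full psychiatric evaluation.

```r
## Illustrative R sketch of the screening-to-intervention mapping described
## above; hypothetical helper, not part of the CovidPsy protocol itself.
suggest_interventions <- function(acute_stress, burnout, anxiety) {
  actions <- character(0)
  if (acute_stress)
    actions <- c(actions, "defusing", "psychoeducation on PTSD",
                 "EMDR for recent trauma")
  if (burnout)
    actions <- c(actions, "direct feedback on burnout symptoms",
                 "suggest sick leave")
  if (anxiety || acute_stress)
    actions <- c(actions,
                 "stress management: safe place, cardiac coherence, mindfulness")
  unique(actions)
}

# Example: a worker screening positive for acute stress and anxiety symptoms
suggest_interventions(acute_stress = TRUE, burnout = FALSE, anxiety = TRUE)
```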
For all the clinical situations, analyses of stress factors at work were conducted with the person who consulted, and a search for strategies to cope with them was undertaken. Personal resources were sought and reinforced as much as possible. Medications could also be prescribed depending on the psychiatric evaluation. We proposed short interventions which should not exceed three consultations, with a few exceptions. Indeed, we considered that if the collaborators required a longer intervention, their follow-up should be continued outside of this permanence, and we referred them. This was in keeping with an emergency, short-intervention model, and also reflected the team's lack of availability to provide longer follow-ups. We organized a referral for another follow-up if required, either because the HCW was not sufficiently improved clinically after three sessions at our permanence, or because the HCW asked for another type of follow-up (private psychiatrist-psychotherapist or psychologist-psychotherapist, or a consultation center depending on the hospital). In any case, HCW knew that they could contact us at any moment after the intervention if needed via the hotline, but we did not provide any systematic evaluation after the intervention, as the consultation was set up with a clinical rather than a research goal. The number of sessions depended on the clinical assessment and the therapeutic goals that we defined with the HCW. The number of sessions for one HCW varied from 1 to 11 sessions (µ = 2.7), and the duration of each session varied from 45 to 150 min (µ = 89 min). Materials We collected personal data (date of birth, phone number, marital status, email), information on working conditions (position held, department, work in a COVID-19 unit, change of service due to the COVID-19 outbreak, etc.), medical and psychiatric history and previous trauma, risk factors for severe forms of COVID-19, and contamination by SARS-CoV-2. We chose to use systematic screening with scales to look for these risks before each consultation, to adapt the intervention to the needs of the worker. HCW completed different validated tools in their French versions to look for the main psychiatric situations that were expected, before the intervention (methods previously described): (1) The Maslach Burnout Inventory-Human Services Survey (MBI-HSS) was used in this survey under license (see Supplementary Materials), and consists of three dimensions: emotional exhaustion (EE), depersonalization (DP), and personal achievement (PA). The level of burnout was considered high if EE was ≥27, DP was ≥13, and PA was ≤21; moderate if EE was 17-26, DP was 7-12, and PA was 22-38; and low if EE was ≤16, DP was ≤6, and PA was ≥39 [21,22]. (2) The Hospital Anxiety and Depression Scale (HADS) was also used, which assesses transdiagnostic symptoms of anxiety and depression in patients with a somatic disorder, using a cutoff total score of 11 for anxiety and for depression [23]. (3) The Peritraumatic Distress Inventory (PDI), which screens for distress symptoms during and immediately following a traumatic event, was used with a cutoff of 15 to identify a high risk of future PTSD [24,25]; and (4) the Peritraumatic Dissociative Experiences Questionnaire (PDEQ), which screens for dissociative symptoms such as depersonalization and derealization during and immediately following a traumatic event, was used with a cutoff of 15 to identify a high risk of future PTSD [26,27]. 
(5) The Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5), which assesses current symptoms of PTSD, was also used, with a cutoff score of 33 to identify a PTSD diagnosis and to assess PTSD severity [28,29]; it was administered instead of the PDI and PDEQ only when a PTSD diagnosis was made, that is, when PTSD criteria had been present for more than one month according to the DSM-5 [20]. Quantitative Analysis Descriptive statistical analyses were made using Excel® and an R® software package provided by the R Foundation for Statistical Computing. Qualitative variables were expressed as frequencies and percentages. Quantitative variables were expressed as means with minimum and maximum values. Qualitative Analysis We also performed a qualitative analysis of the 47 written interview notes of HCW who consented to participate in the study: one of the researchers of the team systematically analyzed the semi-structured interview notes written by the clinicians of the psychiatric consultation when they evaluated the psychiatric symptoms of the HCW according to the DSM-5, and looked for frequently emerging themes of work-related difficulties expressed by HCW. Using content analysis methodology, two coders reviewed all interview scripts for recurrent themes, which they then categorized and sub-categorized, while comparing emerging categories to each other to determine their substance and significance [30]. A recurrent theme was defined as a theme occurring more than twice in the interviews of two different HCW. For a theme occurrence to be retained, it had to be noted by both coders in their qualitative analysis of the interview report. In the event of a coding discrepancy, a discussion between the coders and the rest of the research team took place, in order to agree on the appropriate coding and improve inter-rater reliability. According to the triangulation method, results were shared with members of the research team who did not contribute to the qualitative analysis to check whether the results looked coherent. Perceived difficulties at work in link with the COVID-19 outbreak (N = 18), n (%), traumatic stressors (table fragment; only the first rows are recoverable): fear of contamination at work, 7 (38.89); feeling insecure, 6 (33.33). The reviewers gathered five themes in the first category of global themes, which we called "traumatic stressors"; these included exposure to multiple deaths, fear of contamination, feeling insecure, and feeling guilty. Hospital workers mostly expressed difficulties in coping with uncertainty related to COVID-19, especially coping with contradictory information about how contamination occurred, and the lack of knowledge about the infection itself. In general, HCW suffered from having to adapt to many changes, which created feelings of insecurity. We found that 50% (n = 9) expressed suffering linked to their direct exposure to multiple deaths, which constitutes a traumatic event according to the ASD and PTSD criteria in the DSM-5. We found that 44.44% (n = 7) were afraid of being contaminated and subsequently infecting family members or their social circle. This fear can be related to the fear of re-experiencing what they lived through with their patients in a traumatic way, and this fear criterion belongs to the ASD and PTSD diagnostic criteria, as reflected in the corresponding PCL-5 item. We found that 33.33% (n = 6) felt insecure, and this feeling can be related to the anxiety assessed by the HADS and is also included in the intrusive symptoms of ASD and PTSD. 
We found that 11.11% (n = 2) felt guilty, knowing that there is a specific question in PCL5 concerning the fact of blaming oneself (10th question in PCL5). Discussion The high majority of the HCW who came to us worked in COVID-19 units and were either physicians or nurses, confirming what others have found regarding risk factors of stress related to working in COVID-19 units during this crisis [4,5]. The burnout and traumatic stress-related disorders (ASD or PTSD), such as anxiety and depressive symptoms, were found in high proportions for the HCW that consulted the permanence (Figures 1 and 2). Most individuals suffered from working conditions related to their own safety, even if they themselves were not considered at risk to develop a severe form of COVID-19. For certain HCW, they did not feel enough supported by their colleagues, and/or hierarchy. In previous studies, the high prevalence of PTSD has been confirmed, since certain variables were found to be of particular relevance as risk factors as well as resilience factors, including exposure level, working role, years of work experience, social and work support, job organization, quarantine, age, gender, marital status, and coping styles [6,31]. Fear of contamination concerned 44.44% (n = 7) of the HCW who consulted, similar to an Italian study based on an online questionnaire that concluded a higher risk perception, level of worry, and knowledge as related to COVID-19 infection compared to the general population [32]. In our study, participants presented burnout symptoms: 35.29% presented a moderate and 23.53% a severe emotional exhaustion level; 17.65% a moderate and 23.53% a severe depersonalization level; and 11.76% a low and 35.29% a moderate personal achievement (Figure 1). These results confirm those of an Italian study that showed five weeks after the beginning of the outbreak that almost 33% presented high scores of emotional exhaustion, and almost 25% reported high levels of depersonalization [33], with a meta-analysis that identified a burnout prevalence of 37.4% [5]. By analyzing the t-tests results for themes, we can propose some hypotheses concerning the mechanisms underlying the psychiatric issues for HCW. The fact that we found an association between the feeling of incompetence and MBI-EE score was not surprising knowing the questions included in this subscore that concern incompetence. The influence of the presence of conflict of values in HCW and the lack of recognizing on the MBI-DP scores confirmed known data concerning risk factors of burnout [8] (Table 3). To the best of our knowledge, there is no previous qualitative study concerning physical psychiatric consultations with HCW during the first wave of COVID-19, with most interventions consisting of hotlines [34,35], and not face-to-face consultations which offer more in-depth care. Although we did not obtain follow-up data, we noted only a few participants needed to be referred for psychiatric or psychological follow-ups at the end of the CovidPsy care, which was sufficient for the majority of the HCW who consulted. Indeed, 72.22% of the HCW were sufficiently clinically improved after the end of the CovidPsy consultation and we could stop the CovidPsy consultation without referral to a psychiatrist or psychologist, suggesting that the intervention was early and in a preventive process of psychiatric issues. 
This suggests that the efficiency of early detection and care of HCW with psychological suffering to reduce long term health and work consequences would need to be confirmed in a prospective study design. The work-related stress linked with work overload, lack of recognition, and feelings of abandonment by the hierarchy, suggest certain management principles at hospitals, such as reinforcement of staff during a crisis, supporting the efforts of HCW, and accompanying them, are necessary. Some authors suggested a theoretical model of emotional contagion that was observed in other groups during pandemics, which could explain our results regarding the psychiatric issues in the group of HCW [36,37]. There are several limitations in our study. First, this description of a consultation activity and the psychiatric screening during the first wave cannot be generalized because of its small and sample size and its auto-selected characteristics. We chose to include only the HCW who decided by themselves to come at the permanence and who agreed to participate to the study, and so the sample was limited. There is a necessary recruitment bias because only the HCW who considered they needed support were included, although other HCW in fact needed it but did not come, and reciprocally, perhaps some HCW came although they did not require it. Second, HCW who came to the psychiatric consultation constituted a small percentage of overall HCW (25 on 13 557 HCW). The psychiatric consultation was only one of several strategies implemented during the first wave in terms of the psychological support. Therefore, the first hypothesis to explain the low number of consultations is that HCW felt helped by other implemented strategies (e.g., psychological support provided close to care units, hypnosis sessions, and hotlines). The presence of psychologists within the COVID-19 units, who could be solicited for speaking in one-to-one settings, or for group interventions, likely played an extremely important role. These interventions were notably different from those coming from CovidPsy consultation. One could also hypothesize that shame and fear of judgement, due to the stigma of psychiatry, could explain this small number of consultations. Workers expressed that they were reluctant to benefit from a psychiatric consultation within the hospital during working hours as they considered that when they were present, their concentration should be on providing care, suggesting that the hospital setting did not facilitate the use of the permanence. Finally, we can argue that in the heart of the first wave, the vast majority of HCW did not feel the need to ask for help because they were motivated by their goal and their pride to accomplish their mission. During the post-crisis phases and successive COVID-19 waves, one can imagine that psychological support could still have been useful and perhaps more needed with time. The first reason may be a delay in psychic distress from the event, by virtue of an afterthought effect. The second reason relates to possible exhaustion over time with these additional COVID-19 waves and the absence of any possibility to recover between them. Therefore, we will have to be vigilant about potential long term psychological effects, particularly if we consider the prospective disillusionment and reconstruction phases of human services workers following a disaster [38]. 
Furthermore, we analyzed semi-structured interview notes that were not exact verbatim records of the consultations, having been transcribed and interpreted by the psychiatrist in a clinical context. We did not record the interviews because this was a retrospective study of clinical notes obtained during semi-structured interviews. In fact, it is not generally acceptable to record clinical interviews in routine clinical practice. We recall that the consultation did not originally have a research goal. Without systematically available verbatim transcripts, and with only the interviewers' written notes on the HCW, we recognize that there was probably a bias of interpretation. Ideally, the interviews of the HCW would have been recorded with a scientific goal in mind from the outset. Finally, we did not explore protective factors that could have been helpful for HCW during this unprecedented crisis, except with regard to entourage support, and we do not have long-term follow-ups to assess the evolution after the intervention. This limits the understanding of the mechanisms of psychiatric complications for HCW in the context of the COVID-19 pandemic. In a prospective study, we should have anticipated this need for a broader assessment of intrinsic and extrinsic protective factors. The experience of this consultation activity should help us in the future in the case of other epidemic waves or health crises. Generally, the difficulties that we encountered in implementing the CovidPsy consultation were in line with the emergency context and the need to take very quick decisions to start the consultation. It is very important, afterwards, to be able to learn from this experience of creating a permanence in an emergency now that the health situation has calmed down. Considering the low rate of consultation at the psychiatric permanence, this consultation seems to have been organized and proposed too early, and might have been more useful later. In the future, this kind of consultation should be maintained for longer and well after the peak of hospitalizations. In all likelihood, this type of consultation would have been useful following the first wave of COVID-19; however, we could not extend it due to the resumption of the usual clinical activities of the psychiatrists and clinical specialist nurses in psychiatry who staffed the psychiatric consultation. Indeed, while we were already deploring four successive waves of COVID-19 in the fall of 2021, the management of our hospital reported a high rate of absenteeism. The hypothesis of delayed psychic consequences can legitimately be put forward. For the next important epidemic waves or similar health crises, one will have to anticipate the necessity of providing psychiatric care for HCW and to find an organization that is compatible with usual activities, for example the creation of permanent, personal spaces in the workplace to engage with mental health specialists. We showed that the qualitative analysis identified subjective information about the difficulties that caused distress, and this will be helpful in elaborating preventive strategies for management concerning the negative effects of a lack of recognition and support, such as the effects of service changes. The intervention of psychiatrists and nurses with colleagues in mental distress is not easy because it is not a common situation. Training sessions were the occasion to establish common clinical references for the activity. Once per week, a coordination meeting was held with the people in charge of the support systems. 
Moreover, several training sessions were provided to the team, recalling an important theoretical-clinical basis for colleagues who were less familiar with this clinical field and to transmit guidelines for the permanence, from the reception of the healthcare worker to the end of the care. We additionally provided sessions on advice for preventing exhaustion and vicarious trauma for the psychiatric consultation team who were exposed to heavy emotional burden arising from caring for their colleagues, as well as to the global effect of the pandemic. Daily team meetings sessions were organized to analyze and discuss the clinical situations and their care. In the future, attention should be paid to potential psychic complications of the psychiatric team, and prevention tools like those we used should be implemented like team exchanges, and prevention sessions about vicarious trauma. Moreover, for all HCW, sessions on the prevention of psychiatric issues in the workplace should be organized throughout training and regularly throughout professional life to reduce these risks in the case of other epidemic waves or health crises. Conclusions This psychiatric consultation for HCW experience provides confirmation of the psychiatric consequences during the first wave of COVID-19, and the type of responses to prevent and early treat potential psychiatric complications. ASD, PTSD, burnout and anxiety symptoms were the most frequent psychiatric outcomes observed. Long-term and psychiatric consequences on mental health are expected in HCW that worked during the first wave of COVID-19. A psychiatric permanence for HCW allowed early intervention to prevent and treat psychiatric issues in the context of COVID-19 pandemics. Further studies would be needed to assess the efficiency of this kind of intervention for HCW. Considering the risk of delayed psychiatric issues, the need of intervention should not be limited in the time and should be offered to HCW even after the crisis period. Institutional Review Board Statement: To retrospectively analyze personal data, we asked all HCW who consulted at the CovidPsy consultation between the 19 March 2020 and the 12 June 2020 for their informed consent in accordance with the decision of the 20 August 2020 of the Cantonal Commission for the Ethics of Research on Human Beings of Geneva, an official commission of the State of Geneva (n • 2020-02036). The need for ethics approval was waived by this Commission. We obtained written consent from all the participants, but no administrative permissions were required to access and use the dataset/medical records. Informed Consent Statement: We obtained written consent for publication from all the participants. Data Availability Statement: Not applicable.
Genomic Analysis of Waterpipe Smoke-Induced Lung Tumor Autophagy and Plasticity The role of autophagy in lung cancer cells exposed to waterpipe smoke (WPS) is not known. Because of the important role of autophagy in tumor resistance and progression, we investigated its relationship with WP smoking. We first showed that WPS activated autophagy, as reflected by LC3 processing, in lung cancer cell lines. The autophagy response in smokers with lung adenocarcinoma, as compared to non-smokers with lung adenocarcinoma, was investigated further using the TCGA lung adenocarcinoma bulk RNA-seq dataset with the available patient metadata on smoking status. The results, based on a machine learning classification model using Random Forest, indicate that smokers have an increase in autophagy-activating genes. Comparative analysis of lung adenocarcinoma molecular signatures in affected patients with a long-term active exposure to smoke compared to non-smoker patients indicates a higher tumor mutational burden, a higher CD8+ T-cell level and a lower dysfunction level in smokers. While the expression of the checkpoint genes tested—PD-1, PD-L1, PD-L2 and CTLA-4—remains unchanged between smokers and non-smokers, B7-1, B7-2, IDO1 and CD200R1 were found to be higher in non-smokers than smokers. Because multiple factors in the tumor microenvironment dictate the success of immunotherapy, in addition to the expression of immune checkpoint genes, our analysis explains why patients who are smokers with lung adenocarcinoma respond better to immunotherapy, even though there are no relative differences in immune checkpoint genes in the two groups. Therefore, targeting autophagy in lung adenocarcinoma patients, in combination with checkpoint inhibitor-targeted therapies or chemotherapy, should be considered in smoker patients with lung adenocarcinoma. Introduction Lung cancer is the second most common diagnosed type of cancer in men and women, after prostate and breast cancers, respectively [1]. The greatest number of deaths are due to cancers of the lung, which account for 25% of all cancer-related deaths [1]. Tobacco smoking is the most common cause for lung cancer [2]. One type of tobacco smoking is waterpipe smoking (WPS), where the smoke of the tobacco passes through water prior to being inhaled. WP use is on the rise globally [3], and there is a strong link between WPS and lung cancer [4,5]. Because of the toxicants present in WPS, smokers are exposed to a large amount and variety of chemicals, including many carcinogens [6,7]. WPS has been shown to result in the generation of free radicals, reactive oxygen species (ROS) and inflammation [8][9][10]. Previous studies have shown that WPS condensate (WPSC) treatment of lung cancer cell lines modulates cell plasticity. WPSC induced epithelial to mesenchymal transition (EMT), cancer stem cell (CSC) features, and an increase in inflammation and DNA damage [11,12]. The consequences of DNA damage depend on the cell type and on the extent and intensity of the stress and could activate senescence, autophagy, or cell death programs. Apoptosis functions to suppress tumor growth, while autophagy can be activated in different cells at different stages of tumor growth and has paradoxical roles as it can suppress or promote tumor growth depending on the type and stage of the tumor [13]. While apoptosis fulfills its role through dismantling damaged or unwanted cells, autophagy maintains cellular homeostasis through recycling selective intracellular organelles and molecules. 
Autophagy is activated by different metabolic stressors in the tumor microenvironment (TME), including hypoxia, nutrient deprivation, and inflammation. In the context of WPS, nicotine present in WPS and in cigarette smoke has been shown to induce bronchial epithelial cell apoptosis, senescence, and autophagy impairment in normal lung epithelial cells post treatment for up to 6 h [14][15][16]. The molecular switch between cell death and cell survival is a key determinant of cell fate and cancer progression. Tumor mutational burden (TMB) rises because of DNA damage response and repair gene alterations, which have direct implications on the immune cells' landscape. An increase in TMB is associated with a favorable response to immune checkpoint inhibitors (ICI) [17] as this can increase immunogenic neoantigen production and its subsequent presentation by antigen-presenting cells, such as dendritic cells (DCs), to CD8+ T-cells, thus promoting their anticancer activity [18]. ICI have been increasingly used in the treatment of non-small cell lung cancer (NSCLC), enhancing response rates and longterm survival but only in a fraction of treated patients [19,20]. The most used ICI-based therapy is anti-PD-1 or anti-PD-L1, which work to block the inhibitory signaling between PD-1, present on the surface of activated T cells, and its ligand PD-L1, expressed on tumor cells [21]. The aim is to revitalize the immune response and eliminate tumor cells. Currently, the application of ICI in NSCLC is determined based on high microsatellite instability (MSI), TMB, PD-L1 expression, and disease burden [20]. These determinants are clearly insufficient to ensure patient response, and other factors in the TME could additionally be involved. Indeed, the TME is a collection of cellular components, including tumor, immune, and endothelial cells, as well as non-cellular components, such as extracellular matrix and signaling factors, cytokines, and chemokines, all of which are functioning together in acidic, hypoxic and nutrient-deprived conditions [22]. Tumor-promoting immune cells, such as myeloid-derived suppressor cells (MDSCs), M2 macrophages and regulatory T cells (Tregs), tend to thrive in such an environment, while tumor antagonizing-cells, including CD8+ T cells and natural killer (NK) cells, tend to be inhibited or even excluded from the tumor site [22]. A better understanding of how these features merge in lung adenocarcinoma patients exposed to smoke is needed to better delineate their response rates following immunotherapy. Our study addresses the role of WPS on autophagy, on TMB in lung cancer cell lines and using TCGA datasets of lung adenocarcinoma patients with a history of smoking. We further investigated the immunological landscape in these datasets. In vitro, we observed an increase in apoptosis at early exposure times followed by an activation of autophagy at longer treatment duration. Long-term exposure up to 6 months in lung cancer cell lines identified an increase in TMB that was also depicted in our analysis of TCGA datasets. Further analysis of the immune landscape of lung adenocarcinoma patients identified no change in immune checkpoint inhibitors between smokers and non-smokers. We also observed an increase in NK cells and CD8+ T cells, coupled by lower T-cell dysfunction. However, there were lower dendritic cell numbers. The current studies point to autophagy as a potential target for treatment of lung adenocarcinoma patients with a history of smoking. 
Our results are suggestive of better prognosis of smokers with lung adenocarcinoma post immunotherapy treatment. Waterpipe Smoke Condensate Increases Apoptosis and Activates Autophagy in Lung Cancer Cell Lines We first investigated the cytotoxic effects of waterpipe smoke condensate (WPSC) and its impact on autophagy. For this purpose, both A549 and H460 lung cancer cell lines were treated with 0.5% WPSC. This WPSC concentration was previously found to cause only a small fraction of A549 and H460 cells to die [11]. Cell viability using the MTT assay at 24, 48 and 72 h was measured. As depicted in Figure 1A,B, A549 cells displayed reduced viability in response to WPSC, whereas H460 cells did not up till 72 h of treatment. The vacuolar (H+) ATPase (V-ATPase) inhibitor Bafilomycin A1 (BafA1) was used to inhibit autophagy [23]. We observed a decrease in cell viability in response to 100 nM of BafA1 in both cell lines. The concomitant treatment of BafA1 and WPSC resulted in an additive negative effect on cell viability that was significant at 72 h, indicating that autophagy pathways could be contributing to cell survival following WPSC treatment. Autophagy and apoptosis are both important in maintaining cellular homeostasis. Stress-inducing signals influence both apoptosis and autophagy, and while functionally distinct, a crosstalk between the two could play an important role in pathological processes, including cancer. As we observed a decrease in cell viability following WPSC treatment, we asked whether apoptosis was activated. Treating A549 and H460 cells with 0.5% WPSC up to 5 days (120 h) resulted in a decrease in cell viability with a gradual increase in apoptosis as measured by an increase in Annexin V/PI positive cells ( Figure 1C-F). WPSC increases apoptosis and autophagy in lung cancer cell lines. Cell viability in response to 0.5% WPSC was measured using MTT assay in A549 (A) and H460 (B) cell lines at 24, 48 and 72 h. Apoptosis was measured by flow cytometry. Cells were stained with a combination of Annexin V-FITC, propidium iodide (PI) following WPSC treatment, in A549 (C,D) and H460 (E,F). Results represent means of three independent experiments, and data represent mean ± standard error of mean. * p ≤ 0.05, ** p ≤ 0.01 and *** p ≤ 0.001. Despite the increase in apoptotic cells, a large percentage of the cells survived the WPSC treatment; up to 60% of A549 and 30% of H460 cells remained viable following 5-day exposure. We therefore examined whether autophagy was activated following WPSC treatment. One method for detecting autophagic flux is by measuring differences in the amounts of LC3-II in the presence of an autophagy inhibitor; we thus analyzed the increase in the ratio of LC3-II to LC3-I by western blot with and without BafA1. The amount of LC3-II in WPSC-treated cells increased further in the presence of BafA1, which indicates an enhancement of autophagic flux starting at 8 h and up to 24 h (Figure 2A,B). The ubiquitinassociated protein p62, which binds to LC3, is also used to monitor autophagic flux; as such, we analyzed the expression levels of p62 following WPSC treatment. Immunofluorescence indicated an increase in p62 puncta, and western blots demonstrated an increase in p62 levels ( Figure 2B). Because autophagy could promote cell survival, we analyzed whether WPSC in combination with autophagy inhibitors would result in a further increase in cell death. 
Pretreating the cells with BafA1 prior to WPSC exposure in A549 cells resulted in a slight increase in late apoptotic cells at 48 h when compared to BafA1-alone-treated cells. In H460 cells, the number of late apoptotic cells increased at 24 h, and necrotic cell death was more prominent at 48 h ( Figure 2F). This result indicates that both cell lines are susceptible to stress-induced cell death, and that autophagy is important in maintaining the surviving cells. Therefore, manipulating pathways of apoptosis, necrosis and autophagy in cancer cells could skew cell fate decisions. We next sought to investigate if this autophagy response is specific to smokers with lung adenocarcinoma, as compared to non-smokers with lung adenocarcinoma. We analyzed the TCGA lung adenocarcinoma bulk RNA-seq dataset with the available patient metadata on smoking status. Using random-forest-based multivariate modeling implemented in GeneSrF, we obtained the top 14 autophagy genes as the best predictors of smoking status [24]. We compared the fold change in expression of all autophagy genes between smokers and non-smokers ( Figure 2G). We also implemented our own random forest modeling using the randomForest package in R (model accuracy = 0.65, sensitivity = 0.96, and precision = 0.60; see methods). Using two feature importance techniques, meanDecreaseAccuracy and meanDecreaseGini, we found that there were four genes that were consistently reported as the top predictors of smoking status ( Figure 2H). The results showed an activation of autophagy in smokers, and among the differentially expressed genes, BNIP3 (Wilcoxon rank sum test, p-value = 2.16 × 10 −5 ) was significantly up-regulated in smokers, and SESN2 (Wilcoxon rank sum test, p-value = 1.67 × 10 −5 ), TRIM22 (Wilcoxon rank sum test, p-value = 2.9 × 10 −7 ) and TNFSF10 (Wilcoxon rank sum test, p-value = 1.74 × 10 −6 ) were significantly down-regulated in smokers ( Figure 2G). The list of additional top predicted genes can be found in Supplementary File S1 (see Supplementary Materials). WPSC induces autophagy in lung cancer cell lines. A549 and H460 cell lines were treated with 0.5% WPSC for 24 h. LC3I/II levels were monitored by western blotting using standard procedures with anti-LC3 and GAPDH as a loading control for (A) band intensity was quantified in (B). The immunofluorescence analysis of p62 protein was performed following 72 h WPSC treatment; cells were treated with 100 nM Baf-A1 for 24 h as positive control (C). Western blotting for p62 protein was performed by standard procedures with anti-p62, and anti-GAPDH as a loading control (D) band intensity was quantified in (E). Cells were stained with a combination of Annexin V-FITC and propidium iodide (PI) to measure apoptosis, following 100nM Baf-A1 pre-treatment and WPSC treatment for the indicated time points, in both cell lines (F). TCGA lung adenocarcinoma bulk RNA-seq datasets of all autophagy genes between smokers and non-smokers (G). Two feature importance techniques were used-meanDecreaseAccuracy and meanDecreaseGini-to classify the top predictors of the autophagy-related genes with smoking status (H). Representative images of confocal microscopic analysis of p62 (green) and DAPI (blue) are shown. Scale bar, 10 µm. Results represent means of three independent experiments, and data represent mean ± standard error of mean. 
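The random-forest classification of smoking status from autophagy-gene expression and the per-gene Wilcoxon comparisons described above can be sketched in R with the randomForest package named in the text. This is an illustrative outline under assumed inputs (an expression matrix `expr` with samples in rows and autophagy genes in columns, and a smoking-status factor `smoking`), not the authors' exact GeneSrF/TCGA pipeline.

```r
## Illustrative sketch, not the study's actual code; input objects are assumed.
library(randomForest)

# expr    - numeric matrix, samples in rows, autophagy genes in columns
# smoking - factor with levels "non_smoker"/"smoker", one entry per sample
classify_smoking <- function(expr, smoking, n_trees = 1000) {
  set.seed(42)                                   # reproducible forest
  fit <- randomForest(x = expr, y = smoking,
                      ntree = n_trees, importance = TRUE)
  imp <- importance(fit)                         # per-gene importance matrix
  list(
    model        = fit,
    top_accuracy = head(rownames(imp)[order(imp[, "MeanDecreaseAccuracy"],
                                            decreasing = TRUE)], 14),
    top_gini     = head(rownames(imp)[order(imp[, "MeanDecreaseGini"],
                                            decreasing = TRUE)], 14)
  )
}

# Per-gene two-group comparison (Wilcoxon rank sum test), as reported above
# for genes such as BNIP3, SESN2, TRIM22 and TNFSF10.
wilcox_by_gene <- function(expr, smoking) {
  apply(expr, 2, function(g)
    wilcox.test(g[smoking == "smoker"], g[smoking == "non_smoker"])$p.value)
}
```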
Temporal Changes in Mutational Landscape of Long-Term Exposure to Waterpipe Smoke in Lung Cancer Cell Lines Genomes While high-throughput sequencing studies have previously reported whole-genome analysis at the genomic, transcriptomic and proteomic levels in samples from smokers compared to non-smokers [25][26][27][28][29][30], as well as in samples from lung cancer [31,32], to date, the genomic landscape in long-term WPS-exposed lung cancer cell lines remains unknown. We used NGS-based whole genome sequencing to analyze mutational burden in A549 and H460 cell lines exposed to 0.5% WPSC for up to 6 months. Our results indicate an overall increase in TMB (per Mb) in 3-month-treated samples that increased further in 6-month-treated samples (1 < medianTMB < 4; p-value < 0.05, Wilcoxon Rank Sum test) ( Figure 3A). We observed that there were more missense mutations and frameshift insertions, compared to frameshift deletions and nonsense mutations in both cell lines. An overall increase in the frame shift insertions in the 6-month-treated samples was observed compared to 3-month-treated samples; these were limited to 1 to 4 bps insertions of C or T of homopolymer lengths. No insertions of >1bp as repeats were found for either of the cell lines ( Figure S1, A549 and Figure S2 H460). When we analyzed missense mutations, we observed a greater number of transitions compared to transversions, specifically C -> T and T -> C mutations ( Figure 3B-E); these are not enriched at APOBEC target sites (the TCW motif). Finally, we analyzed the distribution of single nucleotide variants (SNV) across different chromosomes as a function of log 10 (inter SNV event distance). This allowed us to look for patterns of localized hypermutations or Kataegis, known to be implicated in various cancer types. We observed an increase in Kataegis on chromosome 19 in 6-monthtreated A549 and chromosome 1 in 6-month-treated H460 when compared to the respective three month treated samples ( Figure S3A-D). Together, these data indicate that WPSC exposure over time leads to an increase in tumor mutational burden. Mutations in cancer genes have been shown to occur at certain hot spots, providing an adaptive advantage to the cells and thereby getting positively selected during clonal evolution. We analyzed the genes that are mutated in response to WPS treatment in both cell lines. We investigated gene mutations with a large spatial clustering using clusterScore at z-score >2 and FDR < 0.01 (see Section 4). A clusterScore of 1 indicates the presence of reported mutations within clusters across all samples. In A549, ZNF99, PCDHB5, GPRIN2 and LILRB1 had clusterScores > 0.7 (cluster numbers: ≥5, 2, 2 and ≥1). In H460, FLG, PCDHA10, GPRIN2 and PCDHB13 had clusterScores > 0.7 (cluster numbers: ≥25, ≥1, >2 and ≥1). A complete breakdown of the clustering can be found in the Tables S1-S4. Next, we performed pathway analysis to identify differentially mutated oncogenic genes following long-term WPSC exposure. We identified genes in the MYC and NOTCH pathways that were mutated in 6-month-treated H460 samples but not in 3-month-treated samples ( Figure 4); these were MYC (mutation rate 50%) and PDE4DIP (mutation rate 75%), due to frameshift insertions and nonsense mutations. Mutations in these genes have not been reported previously as per the variant effect predictor (VEP) database. 
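The per-sample summaries used in this subsection (tumor mutational burden per megabase, transition versus transversion counts, and the log10 inter-SNV distances inspected for kataegis-like clustering) can be derived from a simple variant table. The sketch below is a minimal R illustration under assumed inputs (a hypothetical data frame `snv` of single nucleotide variants and an assumed callable genome size); it is not the study's actual variant-calling or plotting code.

```r
## Minimal sketch under assumed inputs; not the study's analysis pipeline.
# snv         - data.frame with columns chrom, pos, ref, alt (one row per SNV)
# callable_mb - callable genome size in megabases (assumed value)
summarise_snvs <- function(snv, callable_mb = 3000) {
  tmb <- nrow(snv) / callable_mb                    # mutations per Mb
  change <- paste(snv$ref, snv$alt, sep = ">")
  transitions <- c("A>G", "G>A", "C>T", "T>C")
  ti_tv <- table(ifelse(change %in% transitions, "transition", "transversion"))
  # log10 distance between consecutive SNVs per chromosome ("rainfall" values),
  # used to look for kataegis-like local hypermutation clusters
  rainfall <- unlist(lapply(split(snv$pos, snv$chrom), function(p) {
    p <- sort(p)
    if (length(p) < 2) return(numeric(0))
    log10(diff(p) + 1)
  }))
  list(tmb_per_mb = tmb, ti_tv = ti_tv, log10_intermutation_dist = rainfall)
}
```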
Genes that were differentially mutated in 6-month-treated A549 samples were PRX and RYR1 (75% mutation rate each) due to missense and nonsense mutations, and frameshift insertions. One missense mutation observed in PRX gene-rs268673: Ile921Met had already been reported in the dbSNP database, with a known moderate impact; however, all the additional mutations we observed in PRX and RYR1 genes have not been reported previously to the best of our knowledge. Additional differentially mutated genes can be found in Figure S4. In sum, we found an increase in TMB in six-month, WPS-treated cancer cell lines, with an increase in C to T and T to C transitions and frameshift insertions of 1-4 bp homopolymer lengths. We identified genes with an adaptive potential, with GPRIN2 being common across both cell lines. Finally, we found differentially mutated genes in response to the long-term exposure of WPS, including genes from the MYC and NOTCH pathways. Genes differentially mutated in 6-month-treated H460 samples were MYC (mutation rate 50%) and PDE4DIP (mutation rate 75%) (A,B), and genes differentially mutated in 6-month-treated A549 samples were RYR1 and PRX (75% mutation rate each) (C,D). Smoking Is a Key Determinant of TMB of Lung Adenocarcinoma Patients Although cancer cell lines are widely used as an in vitro experimental model in cancer studies, they do not constitute an ideal model for primary tumors due to differences in the microenvironment [33]. Furthermore, studies using smoke extract on cell lines do not parallel human smoking parameters because of variabilities in concentration and in the cell-to-smoke exposure interface in vivo vs. in vitro. In line with this, studying primary lung tumors and their microenvironment in smokers and non-smokers at a molecular level assumes a level of importance. We thus investigated lung adenocarcinoma (LUAD) molecular signatures in affected patients with long-term active exposure to smoke and compared them to patients who had not had any active exposure to smoke in their life. Because there are no studies on patients solely consuming WPS, as which would have been most relevant to our study, we took advantage of the large-scale TCGA molecular dataset on LUADs to compare the differences in molecular signatures in lifelong non-smokers versus tobacco smokers. We divided the patients into two groups based on their smoking status: (1) life-long non-smokers and (2) smokers. We first compared the TMB in smokers and non-smokers affected with LUAD. A higher TMB was observed in smokers compared to non-smokers medianTMB smokers = 4.5,medianTMB non-smokers = 1.09 p-value = 4.13 × 10 −10 Wilcoxon Rank Sum test with continuity correction) ( Figure 5A). In addition to the smoking status, several factors such as age, gender, tumor stage and metastasis status could affect the overall TMB state. We used two random-forest-modelbased feature importance techniques, Increase in Mean Square Error (IncMSE) and Increase in Node Purity (IncNodePurity), to assess the effect of smoking status alone while controlling for these confounding factors. We observed that smoking status remained among the top three important features that are important for TMB prediction with IncMSE = 4.5 and IncNodePurity = 161 ( Figure 5B). Smoke Exposure Is Associated with a Reprogramed Tumor Immune Microenvironment The immune microenvironment could have a key role in determining immunotherapy outcomes. 
To better understand these microenvironmental factors, we focused on four major signatures: (1) immune cell fractions associated with immunotherapy response, (2) the success of T-cell infiltration into tumors, (3) T-cell dysfunction within the tumor microenvironment and (4) the expression of immune checkpoint genes. The digital cytometer CIBERSORTx was first applied to examine immune cell fractions residing in smokers vs. non-smokers ( Figure 6). When compared to smokers, non-smokers had a higher fraction of the antigen-presenting dendritic cells (Wilcoxon rank sum test, p-value = 9.369 × 10 −5 ). However, they also had a higher fraction of the immunosuppressive M2-polarized macrophages (Wilcoxon rank sum test, p-value = 0.0048). Regarding smokers, they displayed higher cell fractions of anti-tumor M1 macrophages (Wilcoxon rank sum test, p-value = 0.05), as well as NK cells (Wilcoxon rank sum test, p-value = 0.017). Finally, we observed a higher cell fraction of Cytotoxic T lymphocytes in smokers when compared to non-smokers (Wilcoxon rank sum test, p-value = 0.04). No differences were found in other B-cell and T-cell fractions, including T-regulatory cells, with the latter being associated with immunosuppressive effects. To evaluate the functional state of infiltrating CTLs and their degree of exclusion from the tumor microenvironment, the TIDE (Tumor Immune Dysfunction and Exclusion) algorithm, TIDEPY, was utilized ( Figure 7). First, we observed a higher score of Cytotoxic T lymphocytes in smokers when compared to non-smokers (Wilcoxon rank sum test, p-value = 0.0036). This was calculated using five genes, CD8A, CD8B, granzyme A, granzyme B and Perforin expression. This effect remains after controlling for confounding factors such as age and gender using a multiple linear regression (MLR) model fit (coefficient smoking status = 0.67, 95% confidence interval = (0.22, 1.12), p-value = 0.003). Of interest, a lower read out for T-cell dysfunction score was observed in smokers as compared to non-smokers (Wilcoxon rank sum test, p-value: 0.0096). Regarding T-cell exclusion, which was based on the presence of immune-inhibitory cells (Cancer Associated Fibroblasts (CAFs), myeloid-derived suppressor cells (MDSCs) and M2 macrophages), no differences could be observed between smokers and non-smokers (Wilcoxon rank sum test, p-value = 0.1). Other markers such as microsatellite instability (MSI) and interferon gamma (IFN-γ) were also analyzed for differential expression between the two groups. There was no difference in IFN-γ levels between smokers and non-smokers (Wilcoxon rank sum test, p-value = 0.3). Furthermore, a higher median score of MSI, a result of defective mismatch DNA repair, was observed in non-smokers than smokers (Wilcoxon rank sum test, p-value = 0.028), albeit the distributions were broad. Finally, the expression of immune checkpoint genes (ICGs) in both groups was analyzed ( Figure 7). Expression was measured in terms of z-score (see Section 4. for details). While there was no difference in the expression levels of PD-1 (Wilcoxon rank sum test, p-value = 0.24), PD-L1 (Wilcoxon rank sum test, p-value = 0.32), PD-L2 (Wilcoxon rank sum test, p-value = 0.66) and CTLA-4 (Wilcoxon rank sum test, p-value = 0.52) between smokers and non-smokers, higher expression levels of co-inhibitory molecules B7-1 (Wilcoxon rank sum test, p-value = 0.0025) and B7-2 (Wilcoxon rank sum test, p-value = 0.0174) were observed in non-smokers. 
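The group comparisons reported in this section (smokers vs. non-smokers, Wilcoxon rank-sum tests with FDR < 0.05) can be sketched as follows. This is an illustrative Python analogue rather than the original workflow; the input layout (one row per patient, one column per CIBERSORTx cell type plus a smoker flag) is an assumption.

```python
# Sketch: per-cell-type Wilcoxon rank-sum (Mann-Whitney U) test between smokers and
# non-smokers, with Benjamini-Hochberg correction across cell types.
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def compare_fractions(fractions: pd.DataFrame, group_col: str = "smoker") -> pd.DataFrame:
    cell_types = [c for c in fractions.columns if c != group_col]
    rows = []
    for ct in cell_types:
        smokers = fractions.loc[fractions[group_col] == 1, ct]
        non_smokers = fractions.loc[fractions[group_col] == 0, ct]
        stat, p = mannwhitneyu(smokers, non_smokers, alternative="two-sided")
        rows.append({"cell_type": ct, "U": stat, "p_value": p})
    out = pd.DataFrame(rows)
    out["fdr"] = multipletests(out["p_value"], method="fdr_bh")[1]  # BH-adjusted p-values
    return out.sort_values("p_value")
```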
Similarly, other suppressors of antitumor responses had higher expression in non-smokers than smokers, namely, IDO1 (Indoleamine 2, 3dioxygenase 1) (Wilcoxon rank sum test, p-value = 0.27) and CD200R1 (Wilcoxon rank sum test, p-value = 0.001). Our analyses of TCGA data provide support for smoking in modulating lung adenocarcinoma patient's tumor microenvironment resulting in immune cell landscape variations. These would constitute potential key targets in therapy modalities. Discussion Accumulated evidence indicates that smoke plays a central role in the evolution of tumor ecosystem and immune escape mechanisms by tumor cells through its impact on immune plasticity and tumor heterogeneity. In this regard, we had previously observed that treating lung cancer cell lines with WPSC resulted in an increase in DNA damage [11]. Here, we asked whether WPSC interferes with the autophagic process and how this may influence the immune landscape in the lung of smokers. Our current data indicate an increase in apoptosis at early WPSC exposure times, confirming other published works [14,[34][35][36][37]. Furthermore, we noted an activation of autophagy following WPSC treatment. Autophagy inhibition resulted in an increase in apoptosis, highlighting a role for autophagy in sustaining cancer cell survival. The cells that escape apoptosis can either undergo autophagy or senescence. While elevated levels of autophagy induce cell death, inadequate autophagy can trigger cellular senescence [38], which we have previously shown is also induced following 8-day treatment with the same concentrations of WPSC [11]. While DNA damage potentiates different repair mechanisms to restore the damaged DNA, which, if unrepaired, would lead to the activation of cell death programs [39], autophagy has been shown to function in delaying apoptotic cell death in cancers as autophagy inhibition sensitizes cancer cells to chemotherapeutic drugs and/or ionizing radiation [32,[40][41][42][43] and is also shown to play a role in the inhibition of the immune response in cancers with high TMB [44]. In WPSC-treated cells, we measured an increase in TMB in vitro; TMB has been observed in several cancers with DNA damage repair gene mutations [45][46][47]. While we did not analyze the DNA damage repair gene status in our study, we did observe an increase in TMB in cell lines exposed to WPSC from 3 to 6 months exposure. Our analysis of the TCGA LUAD dataset reaffirms our results, where we saw an increase in TMB in patients with an active smoking status. Other studies have also addressed the effects of tobacco smoking on normal as well as lung cancer and found this to be associated with an increase in TMB [48][49][50]. We analyzed the genes that were affected with mutations and divided them into two categories: (1) genes with specific mutational hotspots that arise because of the treatment across all samples and (2) differentially mutated genes that only get mutated as the mutational burden increases in the 6-month-treated samples. Genes such as zinc finger protein 99 (ZNF99), a gene found to be mutated in NSCLC with resistance to etoposide [51], and FLG, a highly mutated driver gene found in lung cancer [52], GPRIN2 and PCDHB13 that has been found to be downregulated in NCSLC and that negatively correlated with pathological grade [53], were mutated in all treated samples in both cell lines with a 100% mutation rate. 
In addition, discrepancies in the results obtained in our study with respect to WPS exposure to cell lines and patients' data could be due to the significant role of the TME in modulating cancer cell behavior. WPS exposure could be modulating several biological pathways that would act upstream of DNA damage. Exposure to WPS induces significant alterations in inflammatory cytokines and oxidative stress markers in mice [8][9][10]54,55]. WPS exposure also induces hypoxia [56]. Reactive oxygen species (ROS) could also be generated because of an increase in apoptotic cell death [57], which could generate a positive feed-back to further activate autophagy pathways [58]. Upon modeling-based analysis of TCGA lung adenocarcinoma RNA-seq datasets, we found an activation of autophagy in smokers. The most significantly affected genes were BNIP3, SESN2, TRIM22 and TNFSF10. BNIP3 expression results in the initiation of autophagy by disrupting the beclin1/Bcl-2 complex [59], and BNIP3 protein has been reported to be overexpressed in several cancer types and to participate in enhanced tumor growth [60]. SESN2/Sestrin 2 is a stress-inducible protein that is induced under hypoxic conditions and is reported to be associated with oxidative-stress-induced autophagy [61,62]; indeed, the occurrence of cancers is associated with significant downregulation of SESN2 [63]. Interestingly, TRIM22 stimulates autophagy by promoting BECLIN 1 expression [64] and has also been shown to play a role in driving tumor growth and progression [65]. TNFSF10/TRAIL could induce autophagy in certain cancer cells [66]. Our results suggest that the genes predicted by our model can correctly classify smokers as smokers but could also misclassify non-smokers as smokers. This low accuracy of 0.65 (C.I:(0.5,0.78)) is due to excluding non-autophagy related genes in our analysis. Nevertheless, future treatment interventions based on the autophagy genes could be designed for smokers with a higher confidence than for non-smokers. The limitation of our analysis is that this dataset was analyzed for gene expression in smokers of any devices (cigarettes and others), due to the non-availability of studies that include patients consuming WPS alone. Several studies have shown evidence for the significant role for autophagy in the response to therapeutic treatments in cancers [67]. Because autophagy induction could be associated with resistance to therapy, concomitant targeting of autophagy pathways synergizes with cancer therapeutic drugs to enhance cell death [67][68][69]. On the other hand, pro-autophagic drugs have been used successfully to enhance apoptosis in resistant cells [67]. This is due to the turning on of autophagic cell death mechanisms. Our in vitro data support the mechanism that autophagy is important to maintain cell survival, however the plastic nature of tumor cells and their continuous plasticity in response to their microenvironment may require regular monitoring to assess more effective treatment strategies. How WPS alone affects the autophagy response and the genetic landscape in lung adenocarcinoma patients compared to non-WP smoker patients has yet to be fully elucidated. Unraveling the changes in the immune microenvironment in lung adenocarcinoma patients with a history of smoking could enhance our understanding of factors that could contribute to predicting response to immune checkpoint inhibitors. 
While various biomarkers of response have been validated and are being used in the clinic, the absence of efficacy in a fraction of patients underlines the need for further studies. We thus investigated the immunological landscape in LUAD patients with a history of smoking. We found a higher TMB, NK-cell infiltration, CD8+ T-cell fraction and lower dysfunction level in smokers as compared to non-smokers, even after controlling for various confounding factors. On the other hand, non-smokers seemed to display a more immunosuppressed state, with a higher infiltration of M2 pro-tumor macrophages. Our findings are in agreement with a recent study that showed that NSCLC patients who are previous or current smokers had a higher TMB and neoantigen load, accompanied by a higher infiltration of immune cells, compared to those classified as never-smokers [70]. However, unlike previous studies similar to ours, using statistical models like Random Forest and Multiple Linear Regression, we report for the first time that these results are not affected by, and are not a sole artifact of, other confounding factors; at least for the dataset that we analyze in the present study. Moreover, in accordance with our results, they also reported following mass cytometry (CyTOF) analysis of fresh NSCLC tissues, that smokers have a more immune-activated TME, while the TME of non-smokers is in an immunosuppressed or resting state [70]. Our findings further suggest a more complex relationship between smoking status and immune infiltration. A higher fraction of the immunosuppressive MDSC was present in smokers compared to non-smokers who displayed a higher infiltration of DCs and a higher level of MSI, which is a positive predictor of response to ICI. Considering other markers of response, no differences could be detected in expression levels of PD-L1, among other immune checkpoint genes. Autophagy activation has been shown to decrease the expression of histone deacetylases that downregulate PD-L1 expression [48], validating our findings. Interestingly however, B7-1, B7-2, IDO1 and CD200R1 had higher expression levels in non-smokers relative to smokers. Immune checkpoint inhibitors against IDO1, which negatively impacts T-cell differentiation, are currently being investigated in clinical trials [20]. Our results would suggest better efficacy of such agents in non-smokers compared to smokers. It is important to note that our findings are all based on in silico analysis of a single dataset and would require further validation in independent cohorts of lung adenocarcinoma. Nonetheless, they help shed light on the complexity that is the tumor immune microenvironment in smokers vs nonsmokers with LUAD and supplement the perspective that smoking is only a putative biomarker of response to immunotherapy. Our results provide the first comprehensive analysis, to the best of our knowledge, that would help plan better treatment interventions targeted at LUAD patients with a history of smoking. We also call out the need for carrying similar TCGA studies including information specifically on patients exposed to WPS alone. Studying tumor microenvironment in patients with a history of smoking with a focus on autophagy could provide a stepping stone for novel directed immunotherapy approaches. Waterpipe Smoke Sampling and Analysis Waterpipe smoke sampling and analysis was described previously [11]. 
Flow Cytometry Apoptosis assays were performed using APC Annexin V Apoptosis Detection Kit with Propidium Iodide (PI) (Biolegend, 640914 San Diego, CA, USA). Briefly, cells were plated at a density of 100,000 cells per dish in 35 mm dishes (Eppendorf 0030 700.112, Hamburg, Germany). Following WPSC treatment, the cells were collected at the indicated timepoints by trypsinization and subsequently washed with 1× PBS prior to labeling with Annexin V-APC and PI following the manufacturer's protocol. Acquisitions of 20,000 cells were performed using a Biorad S3E Cell Sorter and data processed using the FCS Express flow cytometry program (De Novo Software, Pasadena, CA, USA). Annexin V-positive cells were classified as apoptotic. Statistical Analysis Statistical analyses were carried out using GraphPad Prism Software version 9.3.1 (GraphPad Software, Inc, San Diego, CA, USA). All data are expressed as means ± SEM. Significant differences were found using two-way analysis of variance (ANOVA) followed by correction for multiple comparison using Tukey test. Immunofluorescence Cells were fixed in 4% paraformaldehyde (ThermoFisher Scientific 28906, Waltham, MA, USA) in 1× PBS for 10 min at room temperature. Cells were then washed with 1× PBS and permeabilized with 0.1% TX-100 in PBS for 15 min at room temperature. Prior to staining, cells were blocked in 2% BSA in 1× PBS for 1 h at RT. Cells were then stained with a primary and secondary antibody as per the data sheets followed by three 5 min washes after each antibody staining. Cells were then mounted on glass slides using Prolong gold antifade reagent (ThermoFisher Scientific P36930, MA, USA) and visualized on Zeiss LSM 800 with Airyscan. Whole Exome Sequencing Variant Analysis Whole exome sequencing (WES) was carried out for two non-small cell lung cancer cell lines A549 and H460. Each cell line was treated with water pipe smoke (WPS) and cultured for six months in two sets of biological replicates. Furthermore, two technical replicates were set up for a given biological replicate. Samples for sequencing for each set were collected at 3 months and 6 months. Untreated cancer cells were used as a control. The QiaAmp DNA Mini Kit was used to extract genomic DNA (Qiagen, Hilden, Germany). Exome libraries were prepared from 100 ng of genomic DNA using Ion AmpliSeq™ Exome RDY kit (ThermoFisher Scientific, A38264, MA, USA). With 293 903 total amplicons, this kit covers almost 97 percent of the exonic regions. The samples were barcoded using Ion Xpress Barcode Adapter 1-16 kit (ThermoFisher Scientific, 4474009, MA, USA). The libraries were purified using CleanPCR (Clean NA, GC Biotech, Waddinxveen, The Netherlands). Library quantification was performed using the Ion Library TaqMan Quantitation Kit (Ther-moFisher Scientific, 4468802, MA, USA). The libraries were loaded onto the chips using Ion Chef System (ThermoFisher Scientific, 4484177, MA, USA) by utilizing Ion 540 Chef Reagents. Two samples per chip were loaded in equimolar concentrations (40 picomolar) and were sequenced on Ion S5 XL sequencer (ThermoFisher Scientific, MA, USA). The raw data were aligned with the hg19 version of the genome using Ion Torrent Suite (TS) software, and the bam files were processed for variant calling using low-stringency somatic variant and indel calling. VCF files were obtained as an output of the Torrent Variant Caller. The sample IDs in the vcf header column were changed to ensure uniformity in the downstream analysis. 
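The last preprocessing step mentioned above, changing the sample IDs in the VCF headers, could be scripted as in the sketch below. The paper does not state which tool was used for this step, so bcftools reheader is shown here only as one plausible option, and the file names are placeholders.

```python
# Sketch: rewrite the sample name in a VCF header so that treated/control pairs carry
# uniform IDs downstream. Assumes bcftools is on PATH; all names are hypothetical.
import subprocess

def rename_vcf_sample(vcf_path: str, new_sample_id: str) -> str:
    """Rewrite the sample column name in a VCF header and return the new file path."""
    mapping = vcf_path + ".samples.txt"
    with open(mapping, "w") as fh:
        fh.write(new_sample_id + "\n")  # one new name per line, in header order
    out_path = vcf_path.replace(".vcf", ".renamed.vcf")
    subprocess.run(["bcftools", "reheader", "-s", mapping, "-o", out_path, vcf_path],
                   check=True)
    return out_path

# Usage: rename_vcf_sample("A549_6month_rep1.vcf.gz", "A549_WPSC_6M_R1")
```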
These files were indexed and merged using the respective commands bcftools index and bcftools merge from the package bcftools v1.10.2 [71]. The merging was done in order to create a treated condition and an untreated control pair file. The vcf2maf.pl perl script was used with -remap-chain, vcf-tumor-id, vcf-normal-id, -tumor-id and -normal-id options to obtain the Mutation Annotation Format or MAF files. The -remap-chain option allowed us to remap variants from hg19 to GRCh37 assembly. This was important to successfully run the variant effect predictor (VEP) v102.0 for annotating variants [72]. The id options helped distinguish which sample out of the pair obtained in the previous step was the control. The preprocessing pipeline till this point was automated in Python 3.7.9. The maf files were further analyzed using the R package maftools v2.6.05 [73]. The analyses were carried out after normalizing the variants called in the treated cancer cells against the untreated cancer cells used as the control. This allowed us to focus on only those variants that emerged in the cancer cells post-stress treatment. Sigprofiler was used to report Indel types across all samples [74]. TCGA Analyses TCGA Firehose Legacy bulk RNA-seq Expression profiles were downloaded from cBioportal for Lung Adenocarcinoma (LUAD) with~500 patient samples per dataset. Patient populations were segregated based on their smoking status. Six ordinal categories represented the following meta-data: (1) lifelong non-smoker, (2) current smoker, (3) current reformed smoker for ≥15 years, (4) current reformed smoker for ≤15 years, (5) current reformed smoker (duration not specified) and (6) smoking history not documented. We carried out the entire analysis, which follows below, using two categories: (1) lifelong non-smokers and (2) current smokers. Tumor Mutational Burden (TMB) Analysis The maf format files were segregated into smokers and non-smokers. The tumor mutational burden was calculated using the maftools package in R. To assess the feature importance, we used two metrics: (1) "increase in Mean Squared Error" or IncMSE, and (2) "increase in Node Purity" or IncNodePurity. They were used since the model was trained using the Random Forest Regressor in the R package randomForest. Seven features were included for this analysis: smoking status, age, gender, metastasis state, AJCC staging, AJCC pathology and AJCC nodes. Immune Cell Abundance Analysis Tumor deconvolution or immune cell abundance analysis was carried out using CIBERSORTx [75]. LM22 was used as the signature matrix, and B-mode batch correction (bulk mode) was applied. Quantile normalization was disabled. The analyses were run for 100 permutations, each with an absolute mode. Immune cell fraction distributions were compared across patients with different smoking statuses. The non-parametric Wilcoxon rank sum test was used to check for statistical significance. Multiple hypothesis tests were carried out using the false discovery rate (FDR) < 0.05. TIDEPY Analysis The python package TIDEPY (https://github.com/jingxinfu/TIDEpy, accessed on 12 May 2021) [76] was used to calculate the tumor immune dysfunction and exclusion for two groups, smokers and non-smokers, with lung adenocarcinoma (same dataset as mentioned above). Normalization was carried out using log2(x + 1) transformation followed by average subtraction across all samples. 
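For the TMB feature-importance analysis described in the subsection above, the study used the R randomForest package with %IncMSE and IncNodePurity. The sketch below is a loose Python analogue in which impurity-based importance plays the role of IncNodePurity and permutation importance plays the role of %IncMSE; the clinical column names are assumptions, not the TCGA field names.

```python
# Sketch: random-forest regression of TMB on clinical covariates, reporting two
# importance metrics analogous to IncNodePurity and %IncMSE.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

FEATURES = ["smoking_status", "age", "gender", "metastasis", "ajcc_stage",
            "ajcc_pathology", "ajcc_nodes"]

def tmb_feature_importance(clinical: pd.DataFrame) -> pd.DataFrame:
    X = pd.get_dummies(clinical[FEATURES], drop_first=True)  # encode categorical covariates
    y = clinical["tmb"]
    rf = RandomForestRegressor(n_estimators=1000, oob_score=True, random_state=0)
    rf.fit(X, y)
    perm = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
    return pd.DataFrame({
        "feature": X.columns,
        "impurity_importance": rf.feature_importances_,    # ~ IncNodePurity
        "permutation_importance": perm.importances_mean,   # ~ %IncMSE
    }).sort_values("permutation_importance", ascending=False)
```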
Immune Checkpoint Analysis

Eight immune checkpoint genes were included in the analysis: PD-1, PD-L1, PD-L2, B7-1, B7-2, CTLA-4, IDO1 and CD200R1, based on two recent studies [77,78] highlighting the ICGs most responsive in lung adenocarcinomas compared with normal tissues. The log(TPM) values were extracted for patients with a confirmed status of being either smokers or non-smokers, and a Z-score was calculated for each gene:

Z-score(GeneX, PatientX) = [log(TPM)(GeneX, PatientX) − mean(log(TPM)(GeneX))] / SD(log(TPM)(GeneX))

The Z-score ranges from −1 to +1. A negative value indicates downregulation, and a positive value indicates upregulation.

Autophagy Modeling Analysis

A list of 370 genes involved in autophagy was curated from Fang et al. [24]. To identify the autophagy genes that were highly predictive of smoking status based on their differential expression, we used a random forest regression model approach. For this, we used GeneSrF (varSelRF) [79] to predict the top autophagy genes. We also applied our own random forest model using the R package randomForest with two hyperparameter values, ntree = 1500 and mtry = 19, chosen such that the out-of-bag error rate was minimized to 22%. We used three performance metrics for our model: accuracy, precision and recall. Feature importance was calculated using MeanDecreaseAccuracy and MeanDecreaseGini. Autophagy gene expression distributions were compared using the Z-score as described above.
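A minimal sketch of the gene-wise Z-score defined above, applied to the eight immune checkpoint genes, is given below. The HGNC symbols used for PD-1, PD-L1, PD-L2, B7-1 and B7-2 (PDCD1, CD274, PDCD1LG2, CD80, CD86), the expression-matrix layout and the file name are assumptions of this sketch.

```python
# Sketch: per-gene Z-scores of log(TPM) expression across patients for the eight ICGs.
import pandas as pd

ICGS = ["PDCD1", "CD274", "PDCD1LG2", "CD80", "CD86", "CTLA4", "IDO1", "CD200R1"]

def checkpoint_zscores(log_tpm: pd.DataFrame) -> pd.DataFrame:
    """log_tpm: rows = gene symbols, columns = patient IDs (values already log-transformed)."""
    expr = log_tpm.loc[log_tpm.index.intersection(ICGS)]
    # Z-score per gene: (value - mean across patients) / standard deviation across patients
    return expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)

# Usage (hypothetical file): z = checkpoint_zscores(
#     pd.read_csv("luad_log_tpm.tsv", sep="\t", index_col=0))
```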
2022-06-23T15:13:20.505Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "4f8452c8a2bf4d40b8bf7b838a0ef94a96f0b6ce", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/12/6848/pdf?version=1655807326", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "133312516faa09779b320aa8650013aceccc2756", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
92435471
pes2o/s2orc
v3-fos-license
Investigation of Fungi and Toxin Formation in Maize Grains Collected from Various Iraqi Marketplaces

Maize remains one of the most important cereals used worldwide and serves as the basis of numerous products, such as tortillas, bread, bars, cookies and pizzas. Substantial losses in maize production are caused by fungal contamination. The aim of this study was to isolate fungi and to determine aflatoxin levels in maize grain samples. Fungal contamination was found in all stored samples collected from selected markets in Iraqi governorates. Mycotoxin production by the isolated fungi was then measured by the thin layer chromatography method. Seven different molds were isolated from the 88 maize samples examined for fungal contamination. The mycotoxins detected in the present work were aflatoxin B1 and fumonisin B1.

INTRODUCTION

Maize (Zea mays) is a cereal crop cultivated extensively throughout the world and has the highest production among all cereals (Kogbe and Adediran, 2003). Maize is an important cereal as a source of food and feed. Numerous molds are serious seed pathogens in the field or in storage, causing many types of damage as well as mycotoxin contamination (Logrieco et al., 2003; Torres et al., 2006). The fungal genera usually found in stored grains are Fusarium, Aspergillus and Penicillium, many of them capable of producing toxins (Christensen, 1987; Lacey, 1989). Mold development can be affected by the moisture content of the product (Giorni et al., 2009; Hell et al., 2000), by temperature and by the storage period. This points to the importance of identifying the mold species present in stored grain, with particular attention to mycotoxigenic ones, whose development can pose a risk to human and animal health. Mycotoxins are a diverse group of low-molecular-mass biochemical compounds with biological activity, produced in secondary metabolism by certain fungal species, mostly belonging to Fusarium, Aspergillus and Penicillium. These secondary metabolites have harmful effects on humans and animals and can enter feed and food chains as common contaminants. Certain toxins can be carcinogenic (fumonisins, FB, Group 2B: possibly carcinogenic to humans), carcinogenic and teratogenic (ochratoxin A, OTA, Group 2B), or carcinogenic, mutagenic and teratogenic (aflatoxin B1, AF, Group 1; aflatoxin M1, Group 2B) (Bryden, 2002). Mycotoxins can be produced in the field before harvest, at harvest, during drying and during storage, under conditions that favour fungal development and the resulting mycotoxin formation (Candlish et al., 2001). The risk of contamination by mycotoxins makes food safety a pressing concern for grains and other field crops. Mycotoxins affecting cereals are considered to be of the greatest significance worldwide for human beings (Bhat et al., 2000; Bryden, 2007).

Materials and Methods

Sample collection
A total of ninety stored maize grain samples were collected from various Iraqi marketplaces; the grains were registered, enclosed in sterile bags, transported to the laboratory and kept at 4 °C.
Detection of aflatoxins B1 and G1 in maize samples
Aflatoxins were extracted from the maize samples according to the procedure described by Schuller et al. (1983). Twenty-five grams of each maize sample were added to a 250 ml conical flask containing 25 ml of sterile water and 50 ml of chloroform. The flasks were shaken for 30 minutes on a shaker and the suspensions were filtered. The chloroform extracts were then purified according to Takeda et al. (1979), and the eluates were evaporated to dryness on a water bath. Each residue was re-dissolved in 1 ml of chloroform, and aflatoxins were assayed on thin layer chromatography (TLC) plates coated with silica gel; each sample extract containing aflatoxins was spotted onto the silica gel plates together with aflatoxin B1 and G1 standards. The plates were developed in a glass tank containing chloroform–acetone (9:1 v/v) as the developing solvent. Aflatoxins were quantified as described by Shannon et al. (1983) and FAO (2004). Measurements were performed by fluorescence at 370 nm.

Investigation of aflatoxin production by Aspergillus flavus
The method adopted in the present study was carried out according to A.O.A.C. (1984). A 1 ml aliquot of spore suspension (10−6) was used to inoculate a 250 ml conical flask containing 100 ml of Czapek's Dextrose medium; the flasks were incubated for ten days in the dark at 30 °C and the cultures were examined at the end of the incubation period.

Contamination occurs through small quantities of microbes contaminating the grain as it moves into storage, starting from harvest and handling, from storage equipment, or from spores already present in storage buildings (RRI, 2006). The results showed that Aspergillus flavus recorded the highest frequency (87.9%), followed by Aspergillus niger and Aspergillus parasiticus (86.5% and 79.1%, respectively), whereas the lowest frequencies were recorded for Mucor and Rhizopus stolonifer (6.9% and 22.8%, respectively). Data on the occurrence and relative proportion of mycotoxigenic fungi are valuable and necessary for further work on toxin-producing fungi and their epidemiological significance in maize. Several mold genera produce mycotoxins; Aspergillus and Penicillium in particular are mycotoxigenic molds responsible for typical mycotoxin contamination (Palumbo et al., 2008).

Ability to produce AFB1 and AFG1 in maize
The values in Table 1 present the ability to produce AFB1 in grains, recorded in (63). Several previous investigators have indicated that cereal grains during growth, as well as grapes during development, represent food ecosystems occupied by mycotoxigenic fungi and influenced by abiotic factors such as temperature and relative humidity, chiefly at the microclimate level, together with storage conditions, in many regions of the world (Castellari et al., 2010; Magan et al., 2010). Contamination might be due to prolonged storage of the harvested corn under unfavourable environmental conditions, including high moisture and temperature. Corn stored for extended periods is more vulnerable than freshly harvested corn. Insects and rodents may also contribute to the rapid deterioration of the grains, increasing the corn mycoflora during long-term storage (Hussein and Brasel, 2001). Therefore, good agricultural practices that discourage fungal growth and mycotoxin production are indispensable for reducing mycotoxin levels in corn and corn products.

Production of aflatoxins by Aspergillus flavus isolates
The values presented in Table 2 refer to the general variation in aflatoxin production observed among the tested isolates of Aspergillus flavus.
This is attributed to genomic variation among strains, which is reflected in the quantity and pattern of toxin formation through the terminal metabolic pathways in which the tested fungi showed variation (Liu et al., 2006). Aflatoxins are among the most prominent mycotoxins in cultivated crops. Aflatoxins are produced by several species such as Aspergillus flavus and Aspergillus nomius (Varga and Samson, 2008). Previous studies have documented mycotoxigenic Aspergillus species in stored crops, as well as mycotoxins and aflatoxins in various commodities (Pacin et al., 2009; Moreno et al., 2009). There is a general tendency towards increased consumption of breakfast cereals and corn flakes. Consumption of grains contaminated with mycotoxins is a hazard for animal and human health and can also lead to important commercial losses. Studies of the environmental conditions leading to fungal growth during storage and to the production of mycotoxins have shown that grain moisture content is one of the most important factors (Giorni et al., 2009).
2019-04-03T13:09:20.668Z
2018-12-30T00:00:00.000
{ "year": 2018, "sha1": "6e8fa880210a716e7cb4041a9d88572a39195680", "oa_license": "CCBY", "oa_url": "https://microbiologyjournal.org/download/32011/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6d631bf5f10f33afb224b8a24fce167756314045", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
265550495
pes2o/s2orc
v3-fos-license
Retrospective analysis of 16 cases of lumbar hernia Background Through a retrospective analysis of 16 cases of lumbar hernia, we discussed the anatomical basis, clinical manifestations, diagnosis, and treatment of this rare condition. Methods We collected medical data of 15 patients with a primary lumbar hernia and one patient with a secondary lumbar hernia treated in the General Surgery Department of Wuxi No.2 People's Hospital between January 2008 and June 2021 and analysed their demographic, preoperative, and postoperative data. Results All patients underwent elective surgery performed by the same treatment team for superior lumbar hernias. The median area of the hernia defect was 12 cm2. Fifteen patients underwent sublay repair, and one underwent onlay repair. The median operative time and blood loss were 48 min and 22 mL, respectively. The hernia contents were extraperitoneal fat in 15 patients and partial small intestine in one. The median visual analogue scale score on postoperative day 1 was 3. A postoperative drainage tube was placed in three cases but not used in 13. The median duration of hospital stay was 5 days. Postoperative incision infection occurred in one case. During the follow-up period, no postoperative complications, including haematoma, seroma, incision infection or rupture, recurrence, and chronic pain, occurred in the other 15 cases. Conclusion Lumbar hernias are rare and can be safely and effectively treated by open tension-free repair. Introduction Lumbar hernias are extremely rare and often encountered only once during the career of a hernia surgeon.De Garangeot reported the first case of lumbar hernia in 1731 [1].Lumbar hernia is defined as abdominal organs or retroperitoneal fat protruding between the 12th rib and the iliac crest through the abdominal wall or retroperitoneum.They commonly present as protruding, reversible bulges in the posterior abdominal wall.Owing to its low incidence, clinicians have insufficient experience diagnosing this disease, often leading to misdiagnosis or delayed diagnosis.Therefore, some patients arrive at the hospital's emergency department with incarcerated or strangulated hernias.Abdominal computed tomography (CT) is vital in diagnosing this disease. Due to the risk of incarceration and strangulation, a lumbar hernia should be treated promptly once diagnosed [2].Surgery is the best way to treat this disease; however, there is no consensus on the choice of surgical method.As a result of the extremely low incidence rate, most reports of lumbar hernias can only be case reports or retrospective studies with a few cases [3][4][5][6].We also retrospectively studied 16 patients with lumbar hernia to explore the anatomic basis, clinical manifestations, diagnosis, and treatment. Materials and methods Fifteen patients with a primary lumbar hernia who underwent sublay repair and one patient with a secondary lumbar hernia who underwent onlay repair between January 2008 and June 2021 were assessed.The patches used in the operation were all 10 × 15 cm lightweight large mesh patches.All the patients presented with unilateral or bilateral reversible masses protruding from the superior lumbar triangle (Fig. 1).The diagnosis was confirmed based on typical clinical manifestations, careful physical examination, and abdominal CT (Fig. 
2).All data, including patient age, sex, side of lumbar hernia, previous lumbar trauma or surgery, body mass index (BMI), primary or secondary lumbar hernia, and surgery-related information, were obtained from electronic medical charts.Informed consent was obtained from all the patients.The Wuxi No.2 People's Hospital Ethics Committee approved the scientific research ethics review materials on July 1, 2,019, with acceptance number 2019Y-4. Surgical procedure All surgeries were performed under general anaesthesia by the same treatment team.The patients were placed in the lateral decubitus position to provide a better view of the lumbar region.The surface of the reversible mass was selected, and an incision was made along the skin texture, approximately 6-8 cm in length.The skin and subcutaneous tissues were cut layer-by-layer until the hernia sac and orifice were reached (Fig. 3).Care was taken to protect the contents of the hernia during separation, especially when separating the adhesions between the hernial sac and orifice.In the only case of secondary lumbar hernia in this study, due to the previous trauma and splenectomy, the local tissue adhesion was severe, and the preperitoneal space could not dissociate; finally, onlay repair was selected, and the other 15 patients were treated using the sublay technique.The typical peritoneal space is relatively loose.We could use our fingers and wet gauze to separate gently with a separation range of at least 3 cm beyond the edge of the hernial orifice.The patch was then cut according to the size of the hernial orifice, placed in the preperitoneal space, and placed flat.Finally, the patch, surrounding muscle, and fascial tissues were fixed using absorbable sutures.In cases where no adjacent muscle remains because of attenuation, creating muscle flaps, according to Vagholkar et al. [7], was also a good choice to enhance the repair effect.Considering the pain and inconvenience caused by the placement of drainage after surgery, we only placed drainage in three patients. Peri-operative evaluation and follow-up Demographic data, including sex, age, BMI, previous lumbar trauma or surgery, primary or secondary lumbar hernia, history of chronic obstructive pulmonary disease (COPD) and/or constipation, side of lumbar hernia, and American Society of Anaesthesiologists (ASA) score were collected.Surgery-related information included the size of the abdominal wall defect, hernia contents, operative time, blood loss, postoperative drainage, wound infection, postoperative hospital days, and the visual analogue scale (VAS) score on postoperative day 1.All patients were followed up by telephone calls and outpatient clinic visits, and the last follow-up was conducted in June 2022. Results Demographic data are shown in Table 1, and surgery-related information is presented in Table 2. 
Fifteen patients with primary lumbar hernia and one with secondary lumbar hernia were included in the study, including six (38 %) males and 10 (62 %) females.The patients were aged 31-81 years (median, 54 years), with a median BMI of 20.4 kg/m 2 (range, 16.3-26.7kg/m 2 ).One (6 %) patient had a secondary lumbar hernia due to previous trauma and splenectomy, while the other 15 (94 %) patients had primary lumbar hernias.Five (31 %) patients had a history of COPD and/or constipation (two patients with COPD, two cases with constipation, and one with both).The entire cohort consisted of nine (56 %), six (38 %), and one (6 %) patients with left, right, and bilateral lumbar hernias, respectively.Twelve (75 %) patients had an ASA score of I, and four (25 %) had an ASA score of II. Anatomical basis and aetiology Lumbar hernias account for <2 % of all external abdominal hernias.According to the aetiology, it can be divided into congenital (20 %) and acquired (80 %) lumbar hernias; the latter includes primary (55 %) and secondary (25 %) lumbar hernia [8,9].The hernia generally protrudes through two anatomical constants, the inferior and the superior lumbar triangles, described by Petit and Grynfeltt in 1783 and 1866, respectively [10]. The lumbar region lies between the 12th rib and iliac crest, bordered medially by the mass of the erector spinae muscles.It includes muscular and aponeurotic planes. From superficial to deep, the first superficial plane is formed by the posterior parts of the external oblique and latissimus dorsi muscles.The second plane comprises the internal oblique muscle and posteroinferior serratus muscle posteriorly; the third plane is formed by the transversalis muscle and its aponeurosis and the block of the medial spinal muscles.Finally, the fourth deep plane is formed by the quadratus lumborum muscle, whose anterior aspect inserts into the lumbar bundle of the diaphragm [11]. Lumbar hernias can be classified based on their location and aetiology [12].According to the anatomical location of the defect, they are divided into Grynfeltt (superior triangle) and Petit (inferior triangle) hernias (Fig. 4).However, blunt abdominal trauma may also BMI, Body Mass Index; COPD, chronic obstructive pulmonary disease. 
create lumbar hernia, which is classified as the "diffuse" type and is not confined to these two triangles [13,14].The Grynfeltt superior lumbar triangle (or quadrilateral) is located at the 2nd muscular plane.The superior lumbar triangle is located at the lower margin of the 12th rib; the inner lower boundary of the triangle is the lateral border of the erector spinae, the outer lower boundary is the posterior margin of the internal oblique muscle, and the inner upper boundary is the posteroinferior serratus muscle.Sometimes, the posteroinferior serratus and the internal oblique muscles do not contact the attachment point on the 12th rib, and the lower margin of the 12th rib is also involved in forming a side, forming an unequal quadrilateral space.Its deep surface is the aponeurosis at the beginning of the transverse abdominis.The inferior lumbar triangle lies outside and below the superior lumbar triangle at the level of the 1st muscular plane.It is formed by the iliac crest, posterior margin of the external oblique of the abdomen, and anterior and inferior margins of the latissimus dorsi.The deep layer is the internal oblique abdominal muscle.The superior and inferior lumbar triangles are the weak areas of the posterior and posterior-lateral abdominal walls, respectively.Owing to the lack of muscle protection, the abdominal organs can protrude into the abdominal wall through these two triangles to form a lumbar hernia.Since the superior lumbar triangle is larger in area than the inferior lumbar triangle and the deep surface is weaker, the superior triangle is the most common site of lumbar hernia [15].We also think most people have stronger muscles on the right side of their bodies than on the left.Therefore, lumbar hernias are more often found on the left side and in the superior lumbar triangle [16,17].In the present study, we also found that the majority of lumbar hernias were located in the left and superior triangles.Moreover, bilateral lumbar hernias are even less frequently documented, and most reports are case reports [18,19] Our results showed one patient with a bilateral lumbar hernia.Lumbar hernias most often contain extraperitoneal fat; however, they may also include the colon, small intestine, and spleen.Our results are consistent with previous findings. Trauma, infection, and surgery are important causes of secondary lumbar hernia.12The secondary lumbar hernia underwent splenectomy due to trauma, considered an important cause of lumbar hernia formation.Primary lumbar hernias typically have no obvious cause.Increased abdominal pressure may also contribute to the development of lumbar hernias.Five patients with a long history of COPD and/or constipation were included in this study.Moreover, due to various reasons, waist muscle atrophy may be an important factor in developing this disease.Possible causes of congenital lumbar hernia include somatic cell mutations caused by transient hypoxia, embryological defects, local nerve apraxia, spina bifida nerve compression, and external compression caused by an intraperitoneal mass [20]. Clinical manifestations and diagnosis Lumbar hernias present as a protruding reversible bulge in the posterior abdominal wall that increases with increased abdominal pressure, such as cough and constipation.As the duration of diagnosis increased, the bulge volume also increased. 
Most patients are asymptomatic, and only a small number present with flank, back, or abdominal pain/discomfort.However, the above symptoms are atypical, and the location of the disease is relatively hidden, especially in some obese patients, often leading to diagnostic difficulties.It has been reported that 9-24 % of patients with lumbar hernia visit the hospital because of intestinal obstruction [21]. The diagnosis of this disease depends mainly on the clinical manifestations, careful physical examination, and abdominal CT scans.In the absence of incarceration, a protruding mass can return to the abdominal cavity, whereas lipomas, abscesses, haematomas, and kidney tumours cannot.Abdominal CT is critical for the diagnosis of this disease.It can accurately diagnose hernias and clearly reveal the surrounding anatomical structures and contents, excluding the possibility of tumours and other pathological conditions [22].Preoperative abdominal CT examinations were completed in the 16 patients to diagnose the disease accurately.Moreover, the hernia contents and the size and location of the hernia orifice were determined preoperatively, providing reliable guidance for surgical safety.VAS, visual analogue scale; POD1, post-operative day 1. Treatment Since lumbar hernias can cause discomfort and even incarceration, immediate treatment is recommended once diagnosed.Surgery remains the most effective treatment for this disease.However, surgery is not recommended for patients who do not have a strong desire for surgery or those who cannot tolerate anaesthesia.Owing to its low incidence, the choice of surgical method remains controversial.Surgical methods primarily include laparoscopic and open surgeries.In recent years, some cases of laparoscopic repair of lumbar hernias have been reported [9,19,23].The advantages of laparoscopic surgery include fuller exposure of the hernia orifice, a small incision, reduced postoperative pain, and quick postoperative recovery.However, it also has disadvantages, including potential damage to the abdominal organs, complex surgery, a long learning curve, and only a few experienced large hernia centres, which limits the technique's popularity.Open repair is currently the most commonly used technique for treating lumbar hernias [16]. 
Open surgery consists mainly of the traditional Dowd, Sublay, Onlay, and 'sandwich' (Onlay + Sublay) techniques, the latter three being tension-free repairs. Dowd surgery requires repair with the patient's own tissue, resulting in large surgical trauma, high local tissue tension, a muscle flap prone to ischaemic necrosis, and a high recurrence rate, and is now rarely used [24]. The sublay technique, also known as the Rives-Stoppa technique, repairs the defect through the retromuscular or preperitoneal space. Its safety and effectiveness have been confirmed in relevant studies [25], and this technique is currently the most widely used in clinical practice [26]. The onlay technique repairs defects through the premuscular space and is less effective than the sublay technique because it lacks the support provided by the muscles and fascia. In our study, 15 patients underwent sublay repair, and one underwent onlay repair. The tissue planes of patients with a primary hernia are loose and easy to separate, and the preperitoneal space can be established relatively easily. However, in patients with a history of surgery or trauma, local tissue adhesion is severe, it is difficult to enter the preperitoneal space, and blind dissection can easily damage the abdominal organs; therefore, we chose onlay repair in that setting. We attempted to separate the premuscular space as widely as possible to reduce the postoperative recurrence rate; we believe that the larger the patch area, the better the reinforcement. Sandwich repair may improve the repair effect and reduce postoperative recurrence in patients with extreme emaciation and/or back muscle weakness. Hernial orifice size is an essential factor that should be considered during repair. According to Loukas's classification [27], lumbar hernias are classified into four types based on the size of the defect area: type I, <5 cm2; type II, 5-15 cm2; type III, >15 cm2; and type 0, in which no triangle is formed. In our study, 11 patients had type II hernias and five had type III hernias. The 10 × 15 cm lightweight mesh patch can meet the above repair requirements after appropriate trimming, the principle being to extend at least 3 cm beyond the edge of the hernia orifice. In addition, we believe that, depending on the size of the hernia orifice, non-absorbable sutures can be used to reduce or close it without tension to enhance the repair effect. In our study, one incision infection occurred, possibly related to the patient's diabetes and the absence of postoperative drainage.

There are still many limitations to our study. Although there were no patients with recurrence during the follow-up period, this may be due to the small number of cases or the short follow-up time. In addition, there were no cases of congenital or inferior lumbar hernias. Moreover, our study was retrospective rather than a randomised controlled trial or a large case-control study. Future studies with larger sample sizes are required to explore the diagnosis and treatment outcomes of this disease.

Conclusion
Lumbar hernias are rare, with reversible masses in the posterior abdominal wall as the primary clinical manifestation. Abdominal CT can be used to diagnose the disease accurately. The selection of the surgical method should be determined according to the individual patient's condition. Open tension-free repair is a safe and effective treatment approach.

Fig. 1. A reversible mass in the right superior lumbar triangle.
Table 1. Demographic and clinical characteristics of lumbar hernia (n = 16).
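The Loukas size classification cited above reduces to a simple threshold rule; the following hypothetical helper (not part of the study) merely restates it in code.

```python
# Illustrative only: map a defect area to the Loukas type quoted above
# (type I < 5 cm^2, type II 5-15 cm^2, type III > 15 cm^2).
def loukas_type(defect_area_cm2: float) -> str:
    if defect_area_cm2 < 5:
        return "I"
    if defect_area_cm2 <= 15:
        return "II"
    return "III"

# Example: the median defect in this series (12 cm^2) falls into type II.
print(loukas_type(12.0))  # -> "II"
```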
2023-12-04T05:05:08.292Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "e9a2219953da8cd14441e21b37dd3382043b9cfa", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.heliyon.2023.e22235", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e9a2219953da8cd14441e21b37dd3382043b9cfa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55902159
pes2o/s2orc
v3-fos-license
EXAMPLE OF FLOW MODELLING CHARACTERISTICS IN DIESEL ENGINE NOZZLE Modern transport is still based on vehicles powered by internal combustion engines. Due to stricter ecological requirements, the designers of engines are continually challenged to develop more environmentally friendly engines with the same power and performance. Unfortunately, there are not any significant novelties and innovations available at present which could significantly change the current direction of the development of this type of propulsion machines. That is why the existing ones should be continually developed and improved or optimized their performance. By optimizing, we tend to minimize fuel consumption and lower exhaust emissions in order to meet the norms defined by standards (i.e. Euro standards). Those propulsion engines are actually developed to such extent that our current thinking will not be able to change their basic functionality, but possible opportunities for improvement, especially the improvement of individual components, could be introduced. The latter is possible by computational fluid dynamics (CFD) which can relatively quickly and inexpensively produce calculations prior to prototyping and implementation of accurate measurements on the prototype. This is especially useful in early stages of development or at optimization of dimensional small parts of the object where the physical execution of measurements is impossible or very difficult. With advances of computational fluid dynamics, the studies on the nozzles and outlet channel injectors have been relieved. Recently, the observation and better understanding of the flow in nozzles at large pressure and high velocity is recently being possible.This is very important because the injection process, especially the dispersion of jet fuel, is crucial for the combustion process in the cylinder and consequently for the composition of exhaust gases.And finally, the chemical composition of the fuel has a strong impact on the formation of dangerous emissions, too.The research presents the influence of various volume mesh types on flow characteristics inside a fuel injector nozzle.Our work is based upon the creating of two meshes in the CFD software package.Each of them was used two times.First, a time-dependent mass flow rate was defined at the inlet region and pressure was defined at the outlet.The same mesh was later used to perform a simulation with a defined needle lift curve (and hereby the mesh movement) and inlet and outlet pressure.In next few steps we investigated which approach offered better results and would thus be most suitable for engineering usage. INTRODUCTION The development of modern diesel engines is directed to increase capacity and lower consumption.In future, it will be especially oriented towards even greater fuel economy and purity of diesel engines.Therefore, more and more manufacturers tend to develop engines with smaller volumes, less cylinders and different systems for exhaust gas treatment. 
In achieving these goals, fuel injection systems play an important role, since they are responsible for just-in-time and regular supply of fuel to the engine cylinders.In the course of time, there were several changes in their operation but the basic characteristics remain the same until today.Increased awareness for the environmental protection compel manufacturers to develop ever better and more efficient fuel injection systems.The latest are electronically controlled and allow precise control by opening and closing of valves, fuel injection time is shorter while injection pressure is significantly higher.Electronically controlled injection systems help to reduce harmful emissions (NOX, soot) in the exhaust gases and to increase the engine power as well as reduce the level of noise. Such systems allow the injection under high pressure (about 1500 to 2000 bars) which reduces the emissions of solid particles.The higher the pressure, the better the dispersion (smaller droplets) of the fuel is, which leads to better prepared mixtures at the same time.By controlling the injection pressure that depend on the load and engine frequency, these systems allow control of gaseous emissions and noise.For simultaneous reduction of NOX and soot emissions, the optimal angle of starting injection time is important.The latter is important due to the interaction of different measures to reduce emissions of soot and NOX.In turn, by reducing certain emissions, these measures often cause the increase of others.The injection with common rail allows all the above requirements [1].The space, in which combustion takes place, and the system of fuel supply are connected by the injector nozzle, which is one of the most important elements of the fuel injection system.It is used at the end of the compression phase to enable the supply of fuel under high pressure to the combustion chamber.Injector nozzles take care of properly atomized fuel, which is essential for good combustion, low fuel consumption and the lowest emissions possible.Individual values of by-products of combustion also depend on pressure of fuel injection, openness of nozzles and valves, fuel characteristics and steering components [1]. Due to increased efficiency of the process, several different versions of nozzles have been developed.Their common task is to inject fuel into the cylinder of engine at optimum dispersion [2]. Computer fluid dynamics (CFD) Computers have become an indispensable part of modern engineering practice.By using computers, we can develop, design and improve old products faster.In the sixties, the development of computer fluid dynamics has started.Its main advantages, compared to conventional laboratory experiments, are the speed of implementation, easy adaptability and lower price.Consequently, many prototypes have not been required due to simulation which can figure out whether something is going to work or not and can be improved by the use of computer.For the purposes of CFD, there are several different software packages.In our case, the program, which is widely used in the automotive industry, was used for numerical simulation of our problem.The program is based on the finite volume method to analyze fluid flow [2]. 
Mathematical model of multiphase fluid flow in the selected CFD package

The object of our research is the numerical analysis of the simultaneous flow of two fluid phases (vapour and liquid) through the nozzle of the fuel injection system. In order to solve such a mathematical-physical model, a system of conservation equations must be solved for each fluid phase separately.

The multiphase model describes each of the phases separately. The conservation equations of the individual phases are coupled through terms that describe the transfer of mass, momentum, energy, turbulent kinetic energy and dissipation of turbulent kinetic energy between the phases. These terms are the weakest point of the multiphase model. In the Eulerian multiphase flow model, separate equations are solved numerically for each of the two phases (k and l) in the model [3].

Mass conservation:

∂(α_k ρ_k)/∂t + ∇·(α_k ρ_k v_k) = Γ_kl    (1)

Here: ρ_k – density of phase k, α_k – volume fraction of phase k, v_k – velocity of phase k, Γ_kl – represents the interfacial mass exchange between phases k and l. The following condition must be fulfilled:

Σ_k α_k = 1    (2)

Momentum conservation:

∂(α_k ρ_k v_k)/∂t + ∇·(α_k ρ_k v_k v_k) = −α_k ∇p + ∇·[α_k (τ_k + T_k^t)] + α_k ρ_k f + M_kl    (3)

Here: f – body force vector, which comprises gravity g and the inertial force in a rotating frame; p – pressure (the pressure is assumed to be equal for all phases); M_kl – term which represents the interfacial momentum interaction between phases k and l. The shear stress of phase k is:

τ_k = μ_k [∇v_k + (∇v_k)^T − (2/3)(∇·v_k) I]    (4)

The Reynolds (turbulent) stress is:

T_k^t = μ_k^t [∇v_k + (∇v_k)^T] − (2/3)(ρ_k k_k + μ_k^t ∇·v_k) I    (5)

Here: μ_k – molecular viscosity, μ_k^t – turbulent viscosity. The turbulent viscosity is modelled by:

μ_k^t = C_μ ρ_k k_k² / ε_k    (6)

Energy (total enthalpy) conservation:

∂(α_k ρ_k h_k)/∂t + ∇·(α_k ρ_k v_k h_k) = ∇·(α_k q_k) + α_k ρ_k θ_k + α_k ∂p/∂t + H_kl    (7)

Here: θ_k – heat (enthalpy) source, H_kl – represents the exchange of enthalpy between phases k and l, h_k – enthalpy of phase k, q_k – heat flux. The heat flux q_k is defined by:

q_k = (λ_k + λ_k^t) ∇T_k    (8)

Here: λ_k – molecular thermal conductivity, λ_k^t – turbulent thermal conductivity, T_k – temperature of phase k.

Turbulent kinetic energy conservation:

∂(α_k ρ_k k_k)/∂t + ∇·(α_k ρ_k v_k k_k) = ∇·[α_k (μ_k + μ_k^t/σ_k) ∇k_k] + α_k (P_k − ρ_k ε_k) + K_kl    (9)

The production term due to shear, P_k, for phase k is:

P_k = T_k^t : ∇v_k    (10)

Specifying the mass transfer (interfacial mass exchange): the linear cavitation model was used. It is based on the following relation for the mass exchange:

Γ_kl = ρ_k N''' 4π R² (dR/dt)    (11)

where Γ_kl – mass transfer rate, N''' – bubble number density, R – radius of the bubbles.

Data for numerical calculation

The input data for the calculation were obtained from measurements carried out in the engine laboratory on a Friedmann & Maier test bench for testing injection systems outside the vehicle. With this device, data on the fuel mass flow and needle lift were collected and used as boundary conditions in the simulation. They are presented in Figure 2.

Model mesh

The simulation was carried out on a geometric model of the lowest part of the injector, which covers the tip of the needle, the seat area and the bore for fuel injection (outflow channel). The geometric model was formed from the 2D structure in Figure 4 (left). For the purpose of the numerical simulations, two spatial meshes were created on the model, consisting of 250,000 (mesh 1) and 400,000 (mesh 2) elements. On each mesh, two simulations were conducted: one with a defined mass flow and one with a defined needle lift (Figure 4, right).

Boundary conditions of the model

The basic surfaces (selections) of the model are: inlet, outlet and symmetry. For each of them, boundary conditions for the multiphase flow were set. Furthermore, the parameters for controlling the calculation, the characteristics of the fuel, the convergence criteria and the parameters for post-processing were determined.
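Two relations from the multiphase model above lend themselves to a quick numerical check: the volume-fraction constraint (2) and the eddy-viscosity closure (6). The sketch below uses the standard model constant C_μ = 0.09 and placeholder input values; it is illustrative only and independent of the CFD package used.

```python
# Numerical sketch of the volume-fraction constraint and the k-epsilon eddy-viscosity
# closure from the multiphase model above. All input values are placeholders.
C_MU = 0.09  # standard k-epsilon model constant (assumption)

def mixture_density(alphas, rhos):
    """Mixture density from phase volume fractions and densities; fractions must sum to 1."""
    if abs(sum(alphas) - 1.0) > 1e-9:
        raise ValueError("volume fractions must satisfy sum(alpha_k) = 1")
    return sum(a * r for a, r in zip(alphas, rhos))

def turbulent_viscosity(rho_k, k_k, eps_k, c_mu=C_MU):
    """Eddy viscosity mu_t = C_mu * rho * k^2 / eps for phase k."""
    return c_mu * rho_k * k_k ** 2 / eps_k

# Example: 98% liquid diesel (830 kg/m^3) and 2% vapour (0.1 kg/m^3)
print(mixture_density([0.98, 0.02], [830.0, 0.1]))
print(turbulent_viscosity(830.0, 15.0, 5.0e4))
```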
In the case of the calculation with the defined needle movement, the "crank-angle" simulation type was used, which allows the conditions to change as a function of crank angle. Therefore, for these simulations, besides the three selections already defined (Figure 5, left), four new selections were created: Needle_move, Buffer, Interpolation and No_move (Figure 5, right).

Fig. 5. Selections of boundary conditions (left); operational tree with the additional selections of boundary conditions in the case of defined needle movement (right)

ANALYZING RESEARCH RESULTS

The review of results included the analysis of velocity, turbulent kinetic energy and volume fraction at the outlet. The simulated values of the flow characteristics for both meshes and for both approaches at the flow inlet and outlet are presented below. The examples illustrate the results after 0.001125 s of simulation (about half of the process).

Velocity

The comparability of the meshes for both phases is relatively good in terms of profile shape, but very different in absolute terms. As shown in the graphs, the calculated values for the two phases are very similar at both the inlet and the outlet. However, the difference between the two approaches is substantial. The results match better at the inlet to the bore. The values obtained with the defined needle movement seem more realistic.

Turbulent kinetic energy

The comparison of results between the two meshes shows that the TKE results match relatively well, with noticeable differences only along the walls, after which the flows subside. A slightly larger deviation is observed in the gaseous phase at the exit of the hole, which is confirmed by the graphs of the numerical values. The mesh density appears to have had a large impact on both approaches.

Volume fraction

The volume fractions match well at the inlet, but some differences can be observed at the outlet of the bore. This means that the gaseous phase moved closer to the centre of the bore.

CONCLUSIONS

In the field of computational fluid dynamics, there is still considerable potential for improvement and innovation. Our research therefore presents two different modes of injector nozzle simulation and tries to determine whether they are comparable. The first approach used the defined time-dependent mass flow at the entrance to the injector, with data obtained experimentally. In the second case, the calculations were repeated with the defined needle lift and the pressure at the inlet. Both were used on two different mesh densities of the numerical model.

It turned out that the approaches gave different results. Particularly large deviations occurred in the calculated values of pressure and velocity. Much more comparable results were obtained for turbulent kinetic energy and volume fractions. It was also noted that the discrepancy was greater at the exit of the bore. In both approaches, part of the deviation resulted from the inability to choose exactly the same time interval, although measurement error was also possible. However, those factors could not be the cause of such large deviations in the case of pressure, especially because in the post-analysis the possibility of incorrect settings in the control file was excluded.
Nevertheless, we managed to bring the results closer together in two ways. First, the inlet pressures were taken from the results of the simulation that used the approach with the defined mass flow. This information was then set as a boundary condition in the simulation in which the needle lift and the pressure at the inlet were defined. The match was much better. We therefore believe that with some adjustment of the input data, especially with a more accurately defined mass flow, a very good match could be achieved. Then we did the reverse: from the simulation with the defined needle lift, the mass flows were extracted and prescribed at the inlet. Again, there was a very good match in the results.

Despite that, in this initial research phase we could not confirm whether the two approaches are entirely comparable. We estimate that it would make sense to repeat the measurement of mass flow and needle lift, and then to repeat the simulations with a much more accurate mathematical analysis of the physical conditions of the process, which will be the subject of further work on this problem.

Fig. 2. Data about the fuel mass flow (left) and needle movement (right)

Fig. 6. Defined mass flow: velocities of the gaseous phase for mesh 1 (left) and mesh 2 (right), in m/s
2018-12-11T08:52:07.598Z
2016-03-01T00:00:00.000
{ "year": 2016, "sha1": "16f2c526d87d3114a221fe4305b037703edd929d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.20858/sjsutst.2016.90.11", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "16f2c526d87d3114a221fe4305b037703edd929d", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
202562787
pes2o/s2orc
v3-fos-license
SNP2APA: a database for evaluating effects of genetic variants on alternative polyadenylation in human cancers

Abstract

Alternative polyadenylation (APA) is an important post-transcriptional regulation that recognizes different polyadenylation signals (PASs), resulting in transcripts with different 3′ untranslated regions, thereby influencing a series of biological processes and functions. Recent studies have revealed that some single nucleotide polymorphisms (SNPs) could contribute to tumorigenesis and development through dysregulating APA. However, the associations between SNPs and APA in human cancers remain largely unknown. Here, using genotype and APA data of 9082 samples from The Cancer Genome Atlas (TCGA) and The Cancer 3′UTR Atlas (TC3A), we systematically identified SNPs affecting APA events across 32 cancer types and defined them as APA quantitative trait loci (apaQTLs). As a result, a total of 467 942 cis-apaQTLs and 30 721 trans-apaQTLs were identified. By integrating apaQTLs with survival and genome-wide association studies (GWAS) data, we further identified 2154 apaQTLs associated with patient survival time and 151 342 apaQTLs located in GWAS loci. In addition, we designed an online tool to predict the effects of SNPs on PASs by utilizing a PAS motif prediction tool. Finally, we developed SNP2APA, a user-friendly and intuitive database (http://gong_lab.hzau.edu.cn/SNP2APA/) for data browsing, searching, and downloading. SNP2APA will significantly improve our understanding of genetic variants and APA in human cancers.

INTRODUCTION

Alternative polyadenylation (APA) is a widespread phenomenon that generates transcript isoforms with different lengths of 3′ untranslated regions (3′ UTRs) by recognizing different polyadenylation signals (PASs) (1). More than 70% of human genes have multiple polyadenylation sites (2). As a common post-transcriptional modification mechanism, APA events may cause the alteration of important regulatory elements, such as miRNA binding sites and RNA-protein binding sites, thus impacting the stability, localization and translation rate of mRNAs (3). APA modulation has been investigated in cells, tissues and different diseases. Previous studies have shown that APA often functions in a tissue- or cell-specific manner (4,5), and several APA dysregulations have been identified in human diseases (6)(7)(8)(9), including cancers (10). A significant global 3′ UTR shortening has been found in cancer cell lines and tumor samples compared with normal samples (11). Another study pointed out that shortening or lengthening of the 3′ UTR might lead to a worse prognosis in some cancers. For example, kidney cancer samples with the shorter isoforms of TMCO7 and PLXDC2 were found to have lower survival rates (12). However, research on the role and regulation of APA in cancer is still at an early stage. As the most common genetic variant, single nucleotide polymorphisms (SNPs) are major contributors to the differences in human disease susceptibility (13). Genome-wide association studies (GWAS) have identified thousands of SNPs associated with complex traits and diseases. Currently, most studies of disease/trait-related SNPs remain at the statistical level, and the biological mechanism underlying them is still largely unknown (14).
Quantitative trait locus (QTL) mapping, such as eQTL and meQTL analysis, is a method used to evaluate the effects of genetic variants on intermediate molecular phenotypes, and has been demonstrated to be a powerful tool to decipher the function of SNPs and to prioritize genetic variants within GWAS loci (15)(16)(17)(18)(19). Recent studies have confirmed the associations between several APA quantitative trait loci (apaQTLs) and cancer. For example, the presence of a SNP in a canonical PAS within TP53 (AATAAA to AATACA) has been found to be highly associated with impaired processing of the 3′ end of TP53 transcripts and to increase the susceptibility to cancers including cutaneous basal cell carcinoma, prostate cancer, glioma and colorectal adenoma (20). However, large-scale genome-wide analyses of apaQTLs have rarely been reported, and no database for apaQTLs in cancer is available. Recently, Feng et al. used the Percentage of Distal polyA site Usage Index (PDUI) to quantify APA events for 10,537 tumor samples across the 32 TCGA cancer types (21). Therefore, it is feasible to add APA as an additional dimension to the existing cancer genomic analysis. In this study, by using the genotype and PDUI data, we developed a new computational pipeline to systematically perform apaQTL analyses across 32 cancer types. We further identified apaQTLs associated with patient overall survival time and apaQTLs located in GWAS linkage disequilibrium (LD) regions. The SNP2APA database (http://gong_lab.hzau.edu.cn/SNP2APA/) was constructed for browsing, searching and downloading the apaQTL data.

Collection and processing of genotype data

We downloaded the genotype data across 32 cancer types from the TCGA data portal (https://portal.gdc.cancer.gov/) (22), which contained 898,620 SNPs called by the Affymetrix SNP 6.0 array. We extracted 9082 samples with both genotype data and APA data available (Figure 1A). To increase the power for apaQTL discovery, IMPUTE2 was used to impute autosomal variants of all samples in each cancer type with haplotypes of 1000 Genomes Phase 3 as the reference panel (23,24). After imputation, SNPs of each cancer type were selected according to the following criteria (25): (i) imputation confidence score INFO ≥ 0.4, (ii) minor allele frequency (MAF) ≥ 5%, (iii) SNP missing rate < 5% for best-guessed genotypes at posterior probability ≥ 0.9 and (iv) Hardy-Weinberg equilibrium P-value > 1 × 10^-6 estimated by the Hardy-Weinberg R package (26).

Collection and processing of data for APA events

To quantify dynamic APA events, we used the PDUI value as the indicator and downloaded the values from the TC3A data portal (http://tc3a.org/) for 32 cancer types (Figure 1B) (21). The PDUI value is a novel, intuitive ratio for quantifying APA events based on RNA-Seq data (12). PDUI was calculated as the number of transcripts using the distal polyA site divided by the total number of transcripts using either the distal or the proximal polyA site. A greater PDUI indicates that more transcripts use the distal polyA site, and vice versa; a value of 1 indicates that all transcripts of the gene use the distal polyA site, while a value of 0 indicates that all transcripts of the gene use the proximal polyA site. For each cancer type, APA events were selected as follows: (i) missing rate of PDUI data < 0.1, (ii) standard deviation of PDUI > 5%. After filtering, an average of 4143 APA events per cancer type were included in the further analyses.
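As a concrete illustration of the PDUI definition and of the event-level filters just described, the following is a minimal sketch. The function and variable names and the toy numbers are ours for illustration; they are not taken from TC3A or from the study's pipeline.

```python
import numpy as np

def pdui(distal_count, proximal_count):
    """PDUI = distal-site transcripts / (distal + proximal) transcripts; 1 -> all distal, 0 -> all proximal."""
    total = distal_count + proximal_count
    return np.where(total > 0, distal_count / np.maximum(total, 1), np.nan)

print(pdui(np.array([30]), np.array([10])))   # -> [0.75]

# Toy matrix of PDUI values: rows = APA events, columns = samples (NaN = missing)
pdui_matrix = np.array([
    [0.90, 0.85, np.nan, 0.95, 0.88],
    [0.10, 0.60, 0.20, 0.55, 0.30],
    [0.50, 0.50, 0.51, 0.49, 0.50],
])

missing_rate = np.isnan(pdui_matrix).mean(axis=1)
std_dev = np.nanstd(pdui_matrix, axis=1)

# Keep events with missing rate < 0.1 and standard deviation > 5% (0.05)
keep = (missing_rate < 0.1) & (std_dev > 0.05)
print(keep)   # only the second event passes both filters here
```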
To minimize the effects of outliers on the regression scores, the PDUI values of each gene across all samples were transformed into a standard normal distribution based on rank (25).

Obtaining covariates

To improve the sensitivity of the QTL analyses, we collected several known and unknown confounders as covariates for the apaQTL analysis (25). We first used smartpca in the EIGENSTRAT program (27) to perform principal component analysis (PCA) of the genotype data for each cancer type. The top five principal components of the genotype data were included as covariates to correct for ethnicity differences. We additionally used the PEER software (28) to analyse the APA data and obtained the first 15 PEER factors as covariates, which were used to eliminate possible batch effects and other confounders. Finally, other common confounders, such as gender, age and tumor stage (25,29,30), were also included as covariates for the apaQTL analysis.

Identification of cis- and trans-apaQTLs using MatrixEQTL

For each cancer type, we evaluated pairwise associations between autosomal SNPs and APA events through linear regression using MatrixEQTL (31), a software package for efficient QTL analysis. The SNP locations (hg19) were downloaded from the dbSNP database (https://www.ncbi.nlm.nih.gov/projects/SNP) and the distal PAS locations were extracted from the APA datasets. SNPs with false discovery rates (FDRs) < 0.05 calculated by MatrixEQTL and an absolute value of the correlation coefficient (r) ≥ 0.3 were defined as apaQTLs (Figure 1C). Of these, we further defined the apaQTLs within 1 Mb of the distal PAS as cis-apaQTLs (25), while the apaQTLs beyond that region or on another chromosome were defined as trans-apaQTLs.

Identification of survival-associated apaQTLs

To prioritize promising apaQTLs, we further examined the association between apaQTLs and patient survival time. The clinical data, including patient survival time, were downloaded from the TCGA data portal. For each apaQTL, the samples were divided into three groups by genotype: homozygous genotype (AA), heterozygous genotype (Aa) and homozygous genotype (aa). The log-rank test was then performed to examine the differences in survival time, and Kaplan-Meier (KM) curves were plotted for intuitive visualization of the survival time of each group. Finally, apaQTLs with FDR < 0.05 were designated as survival-associated apaQTLs.

Identification of GWAS-associated apaQTLs

GWAS has been successfully used to identify thousands of disease susceptibility loci, but it remains a challenge to pinpoint the causal variants and decipher their underlying mechanisms. To facilitate the interpretation of GWAS results, we integrated apaQTLs with existing GWAS risk loci to explore trait/disease-associated apaQTLs. We downloaded all the risk tag SNPs identified in GWAS studies from the GWAS catalog (http://www.ebi.ac.uk/gwas, accessed September 2018) (32). The SNPs in linkage disequilibrium (LD) with the GWAS tag SNPs were then extracted from SNAP (https://personal.broadinstitute.org/plin/snap/ldsearch.php) (33). The parameters were set as follows: (i) SNP dataset: 1000 Genomes, (ii) r² (the square of the Pearson correlation coefficient of LD) threshold: 0.5, (iii) population panel: CEU (Utah residents with northern and western European ancestry), (iv) distance limit: 500 kb. Finally, we defined apaQTLs that overlapped with these GWAS tag SNPs and LD SNPs as GWAS-associated apaQTLs.
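The per-pair association test described above can be illustrated with a small sketch. This is not the MatrixEQTL implementation; it simply writes out the same idea (rank-based inverse-normal transform of the PDUI phenotype, then an additive-genotype linear regression with covariates) on illustrative, simulated data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200

# Illustrative data: genotype coded 0/1/2, PDUI in [0, 1], two covariates
genotype = rng.integers(0, 3, size=n).astype(float)
covariates = rng.normal(size=(n, 2))                     # e.g. genotype PCs / PEER factors
pdui = np.clip(0.5 + 0.05 * genotype + 0.1 * rng.normal(size=n), 0, 1)  # toy signal

# Rank-based inverse-normal transform of the phenotype
ranks = stats.rankdata(pdui)
phenotype = stats.norm.ppf((ranks - 0.5) / n)

# Ordinary least squares: phenotype ~ intercept + genotype + covariates
X = np.column_stack([np.ones(n), genotype, covariates])
beta, res_ss, _, _ = np.linalg.lstsq(X, phenotype, rcond=None)
dof = n - X.shape[1]
sigma2 = res_ss[0] / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se
p_value = 2 * stats.t.sf(abs(t_stat), dof)

r = np.corrcoef(genotype, phenotype)[0, 1]
print(f"beta={beta[1]:.3f}, r={r:.2f}, p={p_value:.2e}")
# In the real pipeline this test is repeated for every SNP-APA pair and the
# p-values are corrected for multiple testing (FDR < 0.05, |r| >= 0.3).
```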
DATABASE CONSTRUCTION AND CONTENT All results mentioned above were stored into MongoDB database (version 3.4.2) in the form of relation tables. A user-friendly web interface, SNP2APA (http://gong lab. hzau.edu.cn/SNP2APA/), was constructed to support data browsing, searching, downloading and PAS online prediction ( Figure 1D and Survival and GWAS associated apaQTLs To prioritize promising apaQTLs, we associated apaQTLs with the survival data of patients downloaded from the TCGA portal. A total of 2154 apaQTLs associated with overall survival time across 32 cancer types at FDR < 0.05, were identified and included in SNP2APA. For example, we found that rs10247994 was highly associated with patient overall survival time in kidney renal clear cell carcinoma (KIRC) ( Figure 2C). The significant differences in PDUI values among corresponding genotypes of rs10247994 were observed, indicating that this SNP might play an important role in regulating the APA event of PUSH gene in KIRC ( Figure 2C). We further mapped apaQTL results to SNPs in GWAS regions and identified a total of 151 342 apaQTLs overlapping with GWAS LD regions with one or multiple traits. For example, rs2303282, as a risk SNP, was reported to be associated with BRCA (34). In our study, we found that rs370151 was in LD with the rs2303282 (LD r 2 = 0.87) and was highly associated with APA event of AMFR gene. AMFR was reported to encode a tumor motor stimulating protein receptor (35). Thus, it could be inferred that rs370151 might play an important role in breast cancer by affecting APA events ( Figure 2D). THE FUNCTION AND USAGE OF SNP2APA DATABASE SNP2APA provided a user-friendly web interface (http: //gong lab.hzau.edu.cn/SNP2APA/) that enabled users to browse, search, and download four datasets: cis-apaQTLs, trans-apaQTLs, survival-apaQTLs, and GWAS-apaQTLs. In addition, we designed a 'Pancan-apaQTL' page for batch search and visualization. A 'PAS Predict' page was constructed for online predicting whether a SNP could destroy or create the PAS of APA. On the homepage, we provided a quick search option for users. After inputting an interested SNP, gene or APA event, users could obtain the corresponding results presented as four dynamic tables containing the information of cis-apaQTLs, trans-apaQTLs, survival-apaQTLs and GWAS-apaQTLs. By querying the cis/trans-apaQTL page, we could obtain a table containing the information of SNP ID, SNP genomic position, SNP alleles, APA events, gene symbol of APA, APA position, beta value (effect size of SNP on PDUI value), r value and P-value of apaQTL ( Figure 2E). For each record, a vector diagram of the boxplot was embedded to display the association between SNP genotypes and PDUI values. By querying the survival-apaQTL page, the SNP ID, SNP genomic position, SNP alleles, sample size, log-rank test P-value, and median survival time of different genotypes will be displayed. For each record, a vector diagram of the KM-plot was provided for visualizing the association between SNP genotypes and overall survival time. On the 'GWAS-apaQTL' page, the information of the SNP, related APA event, gene symbol of APA and related traits would be available. On the 'PanCan-apaQTL' page, users could submit multiple SNPs or gene symbols of APA events. Then they would obtain two heatmaps displaying the correlation coefficient (r) of cis-apaQTLs and trans-apaQTLs across the cancer types ( Figure 2F). PAS is the most important regulatory element during the regulation of APA events (3). 
To further explore the impact of SNP on PAS, we developed a web-based tool by utilizing Dragon PolyA Spotter (http://www.cbrc.kaust.edu.sa/ dps/Capture.html) (36) and designed the 'PAS Predict' page. On this page, users could submit a wild-type sequence and the corresponding mutant sequence to predict the effect of SNP on polyadenylation signals (PAS) so as to determine whether SNP could destroy or create the PAS ( Figure 2G). In SNP2APA, four main datasets for each cancer type are freely available from the 'Download' page. The 'Help' page provided the basic information on database, pipeline of database construction, result summary, and contact. SNP2APA was open to any feedback with email address provided at the bottom of the 'Help' page. CONCLUSION AND FUTURE DIRECTIONS We developed SNP2APA as a resource providing comprehensive apaQTLs across 32 cancer types. To the best of our knowledge, this is the first database systematically evaluating the effects of the genetic variants on APA, especially in multiple cancer types with a large sample size. In recent years, increasing studies have suggested that APA is likely to play important roles in cancer. Therefore, it is urgent to add APA as an additional dimension to existing cancer genomic analysis. In this version of TC3A, by using genotype and APA data of 9082 tumor samples, we provided numerous apaQTLs among multiple cancer types and identified abundant apaQTLs associated with patient survival time or located in known GWAS loci. To explore the impact of SNPs on PAS, we also designed an online tool for users to predict functional apaQTLs. The SNP2APA database will greatly facilitate the interpretation of risk SNPs identified in genetic studies. In the future, with the increasing number of RNA-Seq datasets and genotype data from large consortium projects, we will continue to update the SNP2APA database. We believe that our database will be of particular interest to researchers in the field of genetic variants and APA in cancer.
2019-09-13T13:07:25.941Z
2019-09-12T00:00:00.000
{ "year": 2019, "sha1": "9206ecfbaaf80df316553a9d8e87fc378e4d1db2", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/48/D1/D226/31697189/gkz793.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "30c5505110766f3dd13816b5e9a86b086faac06d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Computer Science", "Biology", "Medicine" ] }
81334453
pes2o/s2orc
v3-fos-license
Threshold Concepts in Medical Education

This article was migrated. The article was marked as recommended.

Background - The theory of Threshold Concepts (TC) proposes that there are ideas necessary for a student to learn which enable them to think like a professional. Studies of TC in higher education have appeared since 2003. Studies in medical education are more recent.

Method - We studied TC using a qualitative analysis approach (grounded theory and constant comparison) to produce a thematic analysis of 135 de-identified reflective practice essays from students in the pediatric clerkship at our medical school.

Summary of results - Seven themes met our criteria for a threshold concept: transformative (ontological shift) and troublesome (causes angst). Two TC in our students' work were identical with those found by authors from the UK ("Medicine isn't black and white" and "Sometimes there isn't a right answer"), four TC were similar and two were distinct.

Discussion - Our findings suggest that there are some TC inherent (maybe essential) in personal and professional identity formation for a student moving from layperson to physician-hood, regardless of the setting of the medical school.

Introduction

Threshold concepts (TC) is a theoretical framework in higher education that describes ways of thinking and reasoning that are unique to a profession and enable the learner to "think like" and become a professional. The TC framework was developed by Meyer and Land (Meyer and Land, 2003; Land, Meyer and Smith, 2008; Meyer, Land and Baille, 2010; Land, Meyer and Flanagan, 2016), who first published in 2003. The first TCs applied to economics and engineering, the disciplines of Meyer and Land. Since then, they have been studied in Europe, Australia, and New Zealand in such diverse professions as architecture, literature, social studies, and accounting. A handful of studies identifying TCs in medical education have appeared recently from authors in the United Kingdom (Neve, Wearn and Collett, 2016a; Barradell and Peseta, 2017; Collett, Neve and Stephen, 2017; Neve, Lloyd and Collett, 2017b; Neve, Lloyd and Collett, 2017a).

There are four key components of a TC:

1. A TC is transformational; it causes an ontological shift in the way the learner views himself or herself as a person/professional (Mezirow, Taylor and associates, 2009). The learner comes to identify themselves in a new way, and to appreciate their role in their new profession in a new way.

2. A TC is integrative; it brings pieces of seemingly unrelated knowledge and attitudes into a whole, when the learner realizes, "Oh, that is what they are talking about!" "Now it all makes sense!" As faculty, we say, "The light bulb went on!"

3. Intertwined with transformation and integration is irreversibility; once the learner embraces the TC, he or she cannot unlearn it. It has become their professional identity.

4. A fourth component is troublesomeness; a TC involves the angst that learners feel as they approach a new way of experiencing themselves and their role, coupled with their fear of this new unknown. We see medical students who want more involvement and responsibility for their patients, but who are simultaneously aware that their knowledge is not complete and are fearful of making a mistake. They oscillate between these two opposing ways of being and struggle to reconcile them. They are facing the uncertainty of medicine. During this troublesome phase, students may become anxious, depressed, cynical, and may contemplate dropping out of medical school.
Other descriptors of TCs are "boundedness": the knowledge gained is specific to that discipline (in our study, to medical care)."Discursive" refers to the student using the language of the discipline, and fitting into the community of practice. Often times this may involve a period of mimicry, when the student is aware that they are using a "script" and has not yet transitioned to a natural use of this new language. Collett and Neve and their colleagues in the UK (Neve, Wearn and Collett, 2016b;Collett, Neve and Stephen, 2017;Neve, Lloyd and Collett, 2017a) have studied the TC encountered by medical students captured in audiodiaries, in which they found concepts including appreciating uncertainty, recognizing a bigger picture, not needing to know everything, and an appreciation of the physicians' professional culture.Barradell and Peseta (Barradell and Peseta, 2017) from Australia have provided a qualitative research synthesis of studies of TC in healthcare.They concluded that, taken as a whole, the studies from 2003-2014 included ideas that induct students into complex practice, enable them to work with new knowledge, and promote the development professional and personal agency. TCs are important for faculty to appreciate, although remembering one's own TC experiences is difficult for experienced faculty, since these concepts have become invisibly integrated into our own professional and personal identify (Meyer and Land, 2006).As faculty, we can help students who may be stuck at a juncture with a TC by listening to their concerns, providing non-judgmental discussions, and most, of all, normalizing the experience for the student.That medical students suffer is undeniable, and we have all seen students who become depressed and decide to leave medical school in their clinical years.Ways of dealing with medical student suffering include: small group sessions, opportunities for protected venting, and guidance for reflection (Egnew et al., 2018). We will examine TCs we identified in our pediatric clerks in the US and find a striking similarity with those identified in medical students in the UK, involving a different student population and a somewhat different method of analysis. Methods We used the grounded theory and constant comparison approaches of qualitative analysis to generate a thematic analysis of reflective essays from our 3 rd year pediatric clerks.Reflective essays are a routine part of the curricular requirements at our medical school, and students write many during the course of the four years.The reflective essay in the pediatric clerkship is a requirement of the course and is ungraded.Clerks are informed in their student manual and by email of the assignment.Prompts are provided and the clerks are told that their essays will be de-identified and analyzed using qualitative analysis by a faculty member and one or more 4 th year students.Prompts for the written reflection are: 1. How have you changed since beginning your clerkship year? 2. What is the most important concept you have understood since beginning medical school that enables you to think like a physician? 3. What is the most difficult concept about being a physician you have encountered? 4. Describe any new approaches to life, medicine, or learning that you have developed this year. 
Each clerk posts their essay on a confidential server, where it is downloaded by an administrator who removes all names and location references. Near the end of the clerkship, the clerks attend a facilitated group session where, if they wish, they can discuss their paper.

The essays were divided among three 4th-year medical students and a faculty member. Each pair of researchers followed the same process: independent line-by-line coding, then discussion of differences within the pair until resolution. There were 75 codes developed across the three sets of researchers. There followed independent thematic analysis within the pair, sorting the codes into themes and discussing the themes until there was complete agreement. At the end, there were 11 themes. It is worth noting that the 4th-year medical students showed great insight into the theory of TCs, and readily self-identified with many student struggles. They were able to explain the point of view of the 3rd-year clerks to the faculty member, and so created an understanding among the group.

The themes were analyzed to determine which ones rose to the level of a definition of a TC. We considered two characteristics to be essential: transformation and troublesomeness. Thus, within the many verbatim student quotes from their papers, we required evidence of a shift in personal identity and a struggle to achieve that shift.

The software programs HyperRESEARCH© and NVivo© were used to keep track of the quotes and codes. The medical students did their initial line-by-line coding using paper copy and colored pencils.

Results/Analysis

Students generally wrote 1-2 pages. Many were descriptive of intensely emotional encounters with TCs. We collected 135 de-identified essays from the class of 2015. No student opted out. There were 75 codes and 11 themes. We found 1 core phenomenon code that represents the overarching theme for the data.

We found 7 TCs, which are illustrated with verbatim student quotes.

"Being smart isn't enough"
"The knowledge is the prerequisite, but it isn't the thing itself."
"Caring for patients requires not only knowing the relevant knowledge, but also developing a human relationship with the patient."

"It's about the patient"
"Sometime since the first day, and I am not sure when, there came a point where I no longer cared about embarrassing myself or how I did things, but rather I started caring about treating the patient and doing the best I could."
"... that my needs, whether it be hunger, thirst, sleep, a clean house or education, are always going to come second to my patient's needs."

"Life isn't fair"
"It taught me that not everything with youth has a happy ending - life isn't always fair."
"This experience was difficult for me... This girl was supposed to begin her life, not have it end that day. It honestly took me a little while to get excited about medicine again."

"Sometimes there isn't a right answer"
"Not having the right answer the first time (or even at all) is very frustrating."
"The difficult balance between the standard of care and the patient's autonomy."

"You can't save everyone"
"I had to learn that you can't and won't save everyone, even those who deserve it the most... this experience taught me that sometimes bad outcomes occur even when you do the best you can."

"Learning is lifelong"
"I find I am relearning everything I was taught with a new perspective."
"Now when I study, it's because I want to, not because I have to."
"Medicine isn't black and white, but almost always grey" "I have come to understand that not many things in medicine are 'by the book.'""The concept of not having the right answer the first time (or at all), is still very difficult." The overarching theme of the threshold concepts is that "There is a disconnect between what I thought medicine was going to be and the reality."This threshold concept was identified in this and every subsequent data set, and by each medical student co-investigator as the core of what they encountered during their clerkship year. Discussion We read with great interest the papers of Collett and Neve and their associates in the UK, who, too, have studied TCs in medical education.Collett et al and Neve et al used audio diaries to collect students' thoughts after a meaningful experience.They used qualitative thematic analysis to parse the TCs recorded in the diaries.The UK medical students are distinctly different from the students at USUHS: they are younger, have less experience in the health care sector, and are not in the military (as are the USUHS students.)Nonetheless, there was a striking similarity between the TCs that emerged from these studies; in fact, four are nearly identical.Table 1 demonstrates the convergence between the TCs described in the UK study and in our study.We did not initially analyze for boundedness, as did Collett.Neither study found evidence of irreversibility, probably because these studies occurred at a single time point.Taken together, these two studies suggest that there are a group of TCs that are important for medical students to grasp as they approach becoming a physician.The overarching concept confronting students is that the reality of medicine often doesn't match their expectations. Conclusion Our work and that of Collett and Neve illustrate the synthetic themes found by Barradell, especially that of working anew with knowledge in the health sciences ("Being smart isn't enough") and induction into the community of practice ("Medicine isn't black and white").These TCs may be universal to preparation for the responsibility of caring for patients. Alternately, these TCs may vary in medical school settings in societies very different from our own. Our work is just beginning as we seek to understand more about TCs in medical education.We will be looking to partner with a medical school with a different culture, to determine which of these TCs are indeed universal in moving from student to physician-hood. Limitations to our study include that it was conducted in one medical school, at one point in time.Some TCs might have been erroneously coded and not recognized as themes.As more medical education researchers study TC, new or reframed TC will emerge. Disclaimer: The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Department of the Army, Department of Defense, nor the U.S. Government.Some authors are a military service member or a U.S. Government employee.This work was prepared as part of their official duties. Take Home Messages Threshold concepts is a framework that proposes each profession has ideas that are necessary for students to understand that enable them to "think like" and become a professional.Threshold concepts involve an ontological shift in the learner, and often a struggle to fully integrate into their professional identity. 
Threshold concepts that may be universal in medical school include: "Being smart isn't enough," "Medicine isn't black and white," and "Sometimes there isn't a right answer." Students may struggle as they begin to incorporate threshold concepts. Faculty can counsel them by listening, normalizing, and making the threshold concept explicit.

Notes On Contributors

Virginia Randall, MD MPH, is Associate Professor of Pediatrics at the Uniformed Services University of the Health Sciences, Bethesda, MD. She served for 30 years as a pediatrician on active duty in the U.S. Army and has been at USUHS for 15 years. She always involves medical students as co-investigators in research.

Robert Brooks, MD is a Captain in the US Air Force, Class of 2018 at the Uniformed Services University of the Health Sciences. He graduated from Haverford College, PA with a BS in Biology and has been interested in epigenetic inheritance. He will continue his career as a pediatric resident at the Walter Reed National Military Medical Center, Bethesda, MD.

Table 1. Convergence of Threshold Concepts in Two Medical Schools
Columns: Collett et al. (2017), School of Medicine and Dentistry, Plymouth University, UK | Randall et al. (2017), Uniformed Services University of the Health Sciences, Bethesda, MD, US

Identical Ideas
Medicine isn't black and white but grey and complex. | Medicine isn't black and white but almost always grey.
There is no single or morally correct answer. | Sometimes there isn't a right answer.

Similar Ideas
Being a doctor is more than just treating the symptoms. | Being smart isn't enough.
Empathy. Two-way conversation. Treating the whole patient. | It's about the patient.

Idea Found Only in UK Analysis
Being a doctor. Thinking like a doctor.

Ideas Found Only in Our Analysis
Medicine is a career of lifelong learning.
You can't save everyone.
Life isn't fair.

Agnes Montgomery, MD is a Captain in the US Army, Class of 2018 at the Uniformed Services University of the Health Sciences. She graduated from Harvard with a BA in French literature and has been active in varsity tennis and the Paralympics. She will continue her career as a pediatric resident at the Tripler Army Medical Center, Honolulu, HI.

Lauren McNally is a 2LT in the US Army, Class of 2019 at the Uniformed Services University of the Health Sciences. She graduated from the University of Virginia with a BS in Education (Communication Disorders). She has been active in health care delivery to homeless and rural Americans.

Declarations

The author has declared that there are no conflicts of interest.

Ethics Statement

This study was approved by the USUHS IRB as exempt protocol PED 86-4343.

External Funding

This article has not had any External Funding.

Acknowledgments

Our thanks to the medical students who took the time to write compelling reflective practice essays during their pediatric clerkship.
Julie Browne, Cardiff University School of Medicine

This review has been migrated. The reviewer awarded 3 stars out of 5.

This was a really enjoyable and interesting read, and the description of Threshold Concepts (TC) will be valuable to anyone who has not encountered this theoretical framework before. It involved the coding and analysis of 135 reflective essays - a substantial piece of work. I particularly liked the practical messages for medical educators who will be supporting students as they tackle this 'troublesome' learning. My only concern is that, even though the results of the work appear to align well with previous studies, the authors indicate that they used only two of Meyer and Land's original criteria to identify threshold concepts in their written data. No reason is given for this decision to modify the framework, and they don't report whether (or how far) they also took into account the integrative, irreversible, bounded and discursive components that characterise a true threshold concept (rather than just an important insight). The integrative component (the 'light-bulb' or 'aha!' moment) is particularly important to the idea of the threshold concept because without it, students will struggle to scaffold and assimilate new knowledge in the future. It is therefore not clear that what the authors actually report can truly be classified as threshold concepts, even though the results certainly appear persuasive.

Faculty of Dentistry, Oral & Craniofacial Sciences, King's College London

This review has been migrated. The reviewer awarded 3 stars out of 5.

It is always interesting to consider aspects of MedEd as a series of thresholds. I do however feel the article would have been strengthened by mention of some of the work already carried out in areas closer to Medicine than just the original Land and Meyer work. If you haven't come across it before, it might interest the authors to look at work carried out in Dentistry. As they say in the article below, TCs offer "a role for the student voice in offering a novice perspective which is paradoxically something that is out of reach of the subject expert. Finally, the application of threshold concepts highlights some of the weaknesses in the competency-based training model of clinical teaching." Threshold concepts in dental education: https://www.ncbi.nlm.nih.gov/pubmed/21985204

Competing Interests: No conflicts of interest were disclosed. This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
2019-03-18T14:02:46.094Z
2018-08-21T00:00:00.000
{ "year": 2018, "sha1": "d3117d97b09c5a040ce76b4aeec8db6188b351d5", "oa_license": "CCBY", "oa_url": "https://www.mededpublish.org/MedEdPublish/PDF/1866-10718.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b63d9d586942ba6c194cf6f2ba0f1e7154f6e325", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Sociology" ] }
213078768
pes2o/s2orc
v3-fos-license
Improving reading comprehension through the formulation of multiple-choice questions

Keywords: reading, learning, reading comprehension

Reading is not only a means of access to knowledge; it is a powerful skill that enables learning and thinking. Reading does not always lead to deep learning; that potential is only achieved when the reader takes part in situations that require going beyond the text. The types of resources used in the classroom to teach comprehension are usually centred on students answering a series of given questions, often of a literal nature, and little work has been done on how students integrate the information they read through situations that demand active learning and stimulate their ability to think. The purpose of this study is to analyse whether dynamics focused on promoting reflection on the text, through the formulation of a series of questions with different answer alternatives, improve comprehension and foster the integration of information into students' knowledge. A total of 118 students aged between 8 and 9 years took part in the study. The results give didactic value to this type of reading practice in that it facilitates the improvement of reading comprehension and of learning itself to a greater extent than other types of reading situations.

Reading comprehension is a highly complex process in which the reader participates actively, bringing into play a series of strategies and knowledge that allow interaction with the meanings of the text, creating a mental model through a process of hypothesis construction and integration of propositions (Calero, 2011). Doing this through the formulation of questions among several people, who verbalize and share their reading strategies, can be a resource that contributes greatly to achieving the mechanisms needed for more effective reading learning.

With this purpose, the present study was carried out to find out whether learning and comprehension improve through the development of the ability to formulate questions according to their typology. To this end, the degree of reading comprehension was compared in two samples of students aged between 8 and 9, one receiving intervention in the development of the ability to formulate questions of different textual typologies and the other following the traditional work programme consisting of reading and then answering a series of questions about the information in the text. The hypothesis is that the students in the group receiving instruction in the skills oriented towards formulating different types of questions will perform better in the comprehension process of reading.
Design and procedure

In order to analyse the effect of an intervention programme centred on the development of comprehension strategies, based on the importance of question formulation for improving reading comprehension, the degree of acquisition of comprehension of written information was compared in two samples of students aged between 8 and 9: one receiving intervention in the reading comprehension process through instruction in strategies for formulating different types of questions (literal, inferential, reorganizational and critical), and the other following the reading instruction programme consisting of answering a series of questions presented in textbooks through the traditional lecture-style class. Our hypothesis was that the students in the intervention programme, whose members verbalize and share the strategies they establish from a text in order to formulate their own questions, would perform better in the development of reading comprehension.

For its part, the control group carried out the readings individually and collectively and then answered the questions individually and in groups, finishing with a joint correction by the whole class group guided by the teacher.

One of the reasons that may explain this improvement is that when readers search for information in the text in order to formulate questions, they need to analyse and integrate the information they read, relating the content of several sections of the text; it is therefore likely that carrying out both processes, generating questions and searching for relevant answers to the questions generated by classmates, is a factor contributing to a deeper comprehension of the text.

This work has shown that reading comprehension does not arise automatically as a product of decoding; it is also necessary to develop comprehension strategies, results that are consistent with current theory. Analysing the different levels of written comprehension, the group participating in the programme shows a higher level of mastery than the students in the control group in syntactic structures, in which the decoding component plays a substantial part in the knowledge of the semantic relations between the different parts of the sentence, which shows that reading strategies exert an influence from the sentence level onwards. The act of decoding is a necessary tool for access to the written world and through it we come to decipher written language, but larger linguistic units demand a more active role from readers, who must bring their own knowledge schemata into play and interact with the written information.
In short, the progress achieved by the participants in the intervention programme confirms its effectiveness, favouring the acquisition of the strategies that readers use to understand a text. At a practical level, we therefore suggest the design of programmes oriented towards the development of reading strategies based on instruction in the skills needed to formulate questions according to the different textual typologies, as was done in this work. Likewise, it would be of interest for future research to consider the influence of these variables in remedial programmes aimed at overcoming reading-learning difficulties, as a strategy for improving comprehension ability.

Reading is not only a means of access to knowledge, but a powerful skill that allows learning and thinking. Reading does not always lead to deep learning; that potential is only achieved when the reader participates in situations that require going beyond the text. The typology of resources used in the classroom to instruct in comprehension is usually focused on the students' responses to a series of given questions, in many cases of a literal nature, and little work has been done on how the student integrates the information he or she reads through situations that demand active learning and stimulate the ability to think. The purpose of this study is to analyze whether, through dynamics focused on enhancing reflection on the text by means of the formulation of a series of questions with different response alternatives, comprehension is improved and the integration of information into the student's knowledge is fostered. 118 students aged between 8 and 9 years participated in the work. The results give didactic value to this type of reading practice in that it facilitates the improvement of reading comprehension and of learning itself to a greater extent than other types of reading situations.

Introduction

The learning of reading is not only a means to access knowledge but a skill that develops the ability to learn and think (Norris & Phillips, 2009; Solé, 2012). However, the reading process does not lead directly to thinking or generating knowledge; this potential is only achieved when the reader participates in situations that require going beyond the text and entering into its analysis and interpretation. The strategy focused on the formulation of questions demands a body of knowledge and a level of reflection on the reading process itself that require the mastery of a series of skills and a high level of cognitive complexity. This task in turn implies mastery of the automatic components involved in comprehension, since if readers show difficulties at the superficial levels, they will not have the competence to ask good questions. In other words, when the reader's cognitive resources are devoted to solving basic tasks such as decoding, they can hardly be devoted to other, more complex demands.
It is known that students who show good reading skills during the first years of their schooling are not necessarily able to learn from reading, especially when it comes to expository texts, textbooks or other disciplinary texts (Solé, 2018).This is because there are a number of basic skills including decoding, knowledge of conventional aspects, recognition of common words, making predictions in narrative texts or simple expository texts, which must be mastered before facing other skills of a more complex nature.This is evidenced by data from the latest PISA report published (OECD, 2015), about 20 % of participants could not prove mastery of basic reading skills, another 23 % showed fairly limited skills in performing simple reading tasks in texts not too complex, while only 6.5 % of participants were at the highest levels, which qualify for more complex reading, indicating that more is needed to improve textual understanding (Gutiérrez, 2016;Silvestri, 2006). Research carried out on the teaching of reading comprehension, as well as that analysing the strategies used in didactic situations, indicate that teachers frequently and naturally use questioning as a didactic resource (Esteban, 2017;Solé, 2018;Tough, 1989).Normally, by formulating questions at the end of the text, either to check what has been understood, to help recapitulate what has been read, or to guide attention to certain aspects of written information.It is also common to find questions in textbooks after the readings in order to help students remember certain information.However, the questions that arise with respect to textbooks in schools are not similar, nor do they present the same degree of complexity, although there is agreement on the differentiation of three main types of questions (Goldman & Duran, 1998;Raphael & Au, 2005;Rouet, 2006).The so-called literal questions that are aimed at identifying data or locating explicit information in fragments of the text and are associated with understanding of a more superficial nature, inferential questions require integrating information and elaborating an interpretation of the text and are associated with a deeper understanding, and critical questions that lead to evaluate the information and analyze the text personally.All three types of questions are relevant to textual comprehension, to learning and to increasing knowledge. The ability to formulate questions is a fundamental component for learning to understand a text (Hoyos & Gallego, 2017), a facet in which not only superficial levels such as lexical processing, syntactic or the construction of representation of explicit information intervene, but also the deepest levels of comprehension, which implies a true learning from the texts, which allows increasing the reader's knowledge (Kintsch, 1998;Solé, 2018).However, not all questions are equally important, nor do they contribute in the same way to learning, so it is important to know whether through the development of certain didactic proposals it is possible to improve comprehension capacity. 
Reading comprehension is a highly complex process in which the reader actively participates in an active way, putting play a series of strategies and knowledge that allow the reader to interact with the meanings of the text, creating a mental model through a process of hypothesis construction and integration of propositions (Calero, 2011), what to do through the formulation of questions among several people verbalizing and sharing their reading strategies can be a resource that contributes greatly to the achievement of the mechanisms necessary for reader learning to be more effective. For this purpose, the present study is being carried out in order to find out whether, through the development of the ability to formulate questions in accordance with their typology, learning and comprehension are improved.To this end, the degree of reading comprehension is compared in two samples of students between the ages of 8 and 9, one who receives intervention in the development of the ability to formulate questions of different textual typology and the other who follows the traditional work programme consisting of reading and then answering a series of questions about the information in the text.The hypothesis is that the students belonging to the group that receives instruction on skills oriented to the formulation of different types of questions will obtain a better performance in the comprehensive process of reading. Participants The study included 118 students between the ages of 8 and 9 (M = 8.36; TD = 0.37), of whom 49.2 per cent were boys and 50.8 per cent girls.Contingency analysis (Pearson chi-square) between condition and sex does not show statistically significant differences (χ² = 0.58, p > .05).All participants share the characteristic of being located in a socio-cultural context of medium level. Design and procedure In order to analyze the effect that the intervention of a program focused on the development of compression strategies based on the relevance that the formulation of questions presents in the improvement of reading comprehension, compares the degree of acquisition of the understanding of the information written in two samples of students between 8 and 9 years old, one that receives intervention in the process of reading comprehension through instruction in strategies for the formulation of different types of literal, inferential, reorganizing and critical questions) and another that follows the program of teaching reading by answering a series of questions presented in textbooks through the traditional master class.Our hypothesis is that students belonging to the intervention program in which their members verbalize and share the strategies established through a text for the formulation of their own questions, will obtain a better performance in the development of reading comprehension. The design is quasi-experimental of pretest-posttest repeated measures with control group (Campbell and Stanley, 2005).Before and after implementing the intervention program, a battery of two evaluation instruments was applied to all experimental and control participants in order to measure the dependent variable (learning of reading comprehension). 
The initial evaluation of the students was carried out collectively in the ordinary classroom in the month of January and during school hours.Subsequently the intervention program was implemented (3 sessions of 50 minutes weekly), the experimental students were distributed in interactive teams and those of the control group individually according to the structure of the traditional classroom.In March, at which time the program had already been fully implemented, the evaluation was carried out again for all students with the same instruments.The study respected the ethical values required in research with human beings (informed consent, right to information, protection of personal data, guarantees of confidentiality, non-discrimination, gratuity and the possibility of leaving the program in any of its phases). Evaluation instruments • ACL-4 test (Catalá, Catalá, Molina & Monclús, 2001).It is composed of a series of texts of different textual typology: narrative, expository and rhetoric.It values reading comprehension by answering literal, inferential, information reorganization and critical appraisal questions.For each of the issues raised, the right choice must be made between five alternatives.These are short texts, but with an internal structure that allows inferring, hierarchizing, organizing information and establishing relationships between phrases.One point is awarded for each correct answer.The test has a Cronbach .80 reliability coefficient. • Evaluation of reading processes.Two subtests of the PROLEC-R test (Cuetos, Rodríguez, Ruano & Arribas, 2007) were used to evaluate reading.Specifically, grammatical structures and sentence comprehension tests were used to evaluate semantic processes.The total score in each of these tests is obtained by assigning one point to each correct answer.This test presents a Cronbach reliability coefficient of 0.79. • Escala de conciencia lectora (ESCOLA) (Puente, Jiménez & Alvarado, 2009).It is a questionnaire for the evaluation of metacognitive skills related to reading.The items evaluate: reading planning (resources for the search of information, attitude, selection of reading strategies), supervision (level of adjustment between the attention and the effort to be made, use of strategies for the selection of the relevant information of the text, level of self-efficacy in the knowledge of the reading tools), and evaluation (control of the reading performance, verification of the suitability of the strategies used, recognition of the results obtained).It is intended for students between the ages of 8 and 13.It consists of 56 items but can also be applied in two reduced versions of 28 items each (School 28-A and School 28-B).Each item has three response options that are scored with 0, 1, 2. The test has a Cronbach reliability coefficient of 0.95. 
Intervention programme

The reading comprehension programme used consists of narrative-type texts structured into 15 sessions of 50 minutes. The objective was to explicitly develop cognitive strategies that allow the reader to construct the meaning of the text from prior knowledge, as well as to acquire the skills needed to regulate and control the entire comprehension process. To this end, the participants of the experimental group were instructed in the formulation of questions of different types (literal, inferential and critical) through a series of questions that guided the students' process of analysis and reflection. These questions had a multiple-choice format with several answer options; they were elaborated both individually and in groups, were later exchanged among classmates who answered them and then corrected each other, and the activity ended with a whole-class discussion guided by the teacher.

For the formulation of literal questions, attention was paid to the recognition of explicit information in texts through tasks aimed at identifying details, sequencing events and facts chronologically, capturing the meaning of words and sentences, and remembering details and passages of the content. Among the clues offered for their formulation are the following: what, where, how, when, for what, with whom?

The inferential questions were oriented to establishing relationships between parts of the text in order to interpret and make deductions from the information that appears explicitly. To teach them, students tried to predict information, deduce messages, infer the meaning of words from their context, prepare small summaries and propose titles for short texts. The questions used for this exercise were the following: why, what would happen before..., what does it mean when..., what title would you put, what is it, what does it mean, what relationship is there...?

The critical questions were aimed at making value judgements, critical analysis, constructing arguments to support opinions, distinguishing a fact from an opinion, analysing the author's intention and drawing conclusions from the content, all of which generated a dialogical interaction between the members of the group that led to mutual reflection and shared learning. Among the questions used for this instruction are: what do you think?, how should it be?, what do you think of...?, what would you have done?

The control group, on the other hand, carried out the readings individually and collectively and later answered the questions individually and in groups, ending with the joint correction by the whole class group guided by the teacher.
Results

In order to analyse the change in the variables under study, descriptive analyses were carried out with the scores obtained in the tests administered at pretest, at posttest and as posttest-pretest differences, together with analyses of variance on the pretest scores (MANOVAs, ANOVAs) and analyses of covariance (MANCOVAs, ANCOVAs) on the posttest-pretest differences in the experimental and control groups for the variables measured before and after the intervention. The statistical program SPSS was used for the data analysis. Kolmogorov-Smirnov and Levene tests were applied to check the normality and homoscedasticity of the sample. The level of significance was set at p < .05. Means and standard deviations of the variables were calculated and multivariate analyses of variance (MANOVA) were performed to compare the groups and delimit between which groups significant differences occurred. In addition, the effect size (Cohen's d) was calculated (small < .50; moderate between .50 and .79; large ≥ .80). The pretest MANOVA for the set of variables showed that before the intervention there were no significant differences between the experimental and control groups, F(1,52) = 1.73, p > .05. However, the MANCOVA on the posttest-pretest differences, using pretest scores as covariates, was significant, F(1,52) = 2.38, p < .05. These data show that the intervention programme had a significant effect. In order to analyse the change in each variable, descriptive and variance analyses were carried out, which are presented in Table 1.

Changes in comprehension at the syntactic level

To analyse the effectiveness of the programme on reading comprehension at the syntactic level, the changes in the scores obtained on the PROLEC-R test were studied. The pretest MANOVA did not show significant differences between the experimental and control groups, F(1,52) = 2.34, p > .05; however, the posttest-pretest MANCOVA, F(1,52) = 1.56, p < .05, confirmed significant differences between the two conditions. With respect to the analysis of each variable independently, in grammatical structures a greater increase was observed in the experimental group (M = .96) than in the control group (M = .39). The pretest ANOVA showed that at this stage there were no significant differences between experimental and control groups, F(1,52) = 7.42, p > .05; however, the posttest-pretest ANCOVA showed statistically significant differences between conditions, F(1,52) = 12.56, p < .001. The effect size was moderate (r = .63). In the sentence comprehension variable there were also higher increases in the experimental group (M = 1.20) than in the control group (M = .51). The pretest ANOVA showed that a priori there were no significant differences between the two conditions, F(1,52) = 8.64, p < .05; an ANCOVA of the posttest-pretest differences indicated significant differences, F(1,152) = 15.64, p < .001, with a large effect size (r = .76). This highlights an improved ability to recognise written information at the syntactic level attributable to the intervention programme.
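As a rough illustration of the analytic approach described above (an ANCOVA on the posttest-pretest gains with the pretest score as covariate, plus an effect size for the gains), a minimal sketch in Python with statsmodels might look as follows. The data are simulated and all column names, group sizes and values are hypothetical, so the output does not correspond to the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical scores: one row per student, pretest and posttest on one scale.
rng = np.random.default_rng(0)
n = 59  # per group (illustrative only)
df = pd.DataFrame({
    "group": ["experimental"] * n + ["control"] * n,
    "pretest": rng.normal(5.0, 1.0, 2 * n),
})
df["posttest"] = df["pretest"] + np.where(df["group"] == "experimental", 0.6, 0.2) \
                 + rng.normal(0.0, 0.8, 2 * n)
df["gain"] = df["posttest"] - df["pretest"]          # posttest-pretest difference

# ANCOVA: gain as outcome, group as factor, pretest as covariate (as in the study).
model = smf.ols("gain ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                # F test for the group effect

# Cohen's d for the gain scores, using the pooled standard deviation.
exp = df.loc[df["group"] == "experimental", "gain"]
ctl = df.loc[df["group"] == "control", "gain"]
pooled_sd = np.sqrt(((len(exp) - 1) * exp.var() + (len(ctl) - 1) * ctl.var())
                    / (len(exp) + len(ctl) - 2))
print("Cohen's d =", (exp.mean() - ctl.mean()) / pooled_sd)
```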
Changes in relational semantic understanding

In order to evaluate the impact of the programme on relational semantic comprehension of short texts, changes in the ACL-4 scores were analysed. The pretest MANOVA carried out with the set of the four measured variables (literal, inferential, reorganisational and critical comprehension) did not show significant differences between the experimental and control groups at the pretest phase, F(1,152) = 2.43, p > .05. However, significant differences were found in the posttest-pretest MANOVA, F(1,152) = 3.61, p < .01, as well as in the posttest-pretest MANCOVA, F(1,152) = 4.21, p < .01. As can be seen in Table 1, in the literal comprehension variable the experimental sample obtained an increase (M = .48) higher than that achieved by the control group (M = .22). The pretest ANOVA showed that at this stage there were no significant differences between experimental and control groups, F(1,152) = .231, p > .05; however, the ANCOVA of the posttest-pretest differences showed significant results, F(1,152) = 2.40, p < .05. The effect size was small (r = .26). In the inferential comprehension variable, the experimental sample obtained an increase (M = .68) higher than that achieved by the control group (M = .13). The pretest ANOVA showed no significant differences between experimental and control groups at this stage, F(1,152) = .325, p > .05; however, the ANCOVA of the posttest-pretest differences showed significant results, F(1,152) = 7.43, p < .001. The effect size was moderate (r = .63). In the reorganisational comprehension variable, there were also higher increases in the experimental group (M = .64) than in the control group (M = .19). The pretest ANOVA showed that a priori there were no significant differences between the two conditions, F(1,152) = .439, p < .05; the ANCOVA of the posttest-pretest differences indicated significant differences, F(1,152) = 9.52, p < .001, with a moderate effect size (r = .67). As in the previous variables, in critical comprehension the experimental group also exceeded the control group in its posttest-pretest mean difference (M = .61 vs. M = .18). The pretest ANOVA showed that before starting the intervention there were no significant differences between experimental and control groups, F(1,152) = .286, p > .01; the ANCOVA of the posttest-pretest differences also indicated significant differences, F(1,152) = 8.73, p < .001. The effect size was moderate (r = .62). These data highlight a significant improvement in the development of relational semantic understanding of textual structures attributable to the intervention programme, as evidenced by the significant increase in literal, inferential, reorganisational and critical comprehension.
Changes in metacognitive reading skills

To assess whether the programme was effective in developing the metacognitive skills involved in learning to read, changes in the scores achieved on the Reading Consciousness Scale (ESCOLA) were analysed. The pretest MANOVA carried out for the set of test variables showed that there were no significant differences between the experimental and control groups at the pretest phase, F(1,152) = 3.15, p > .05. However, significant differences were found in the posttest-pretest MANOVA, F(1,152) = 2.37, p < .01, as well as in the posttest-pretest MANCOVA, F(1,152) = 3.28, p < .01. As can be seen in Table 1, in the planning variable the experimental group obtained a greater improvement (M = .55) than the control group (M = .15). The pretest ANOVA showed that at this stage there were no significant differences between experimental and control groups, F(1,152) = .308, p > .05; however, the ANCOVA of the posttest-pretest differences indicated significant results, F(1,152) = 5.06, p < .01. The effect size was moderate (r = .53). In the supervision variable there were also higher increases in the experimental group (M = .67) than in the control group (M = .24). The pretest ANOVA showed no significant differences between experimental and control groups at this stage, F(1,152) = .145, p > .05; however, the ANCOVA of the posttest-pretest differences indicated significant results, F(1,152) = 7.26, p < .01, with a moderate effect size (r = .65). There was also a significant improvement in evaluation, with a greater increase in the experimental group (M = .70) than in the control group (M = .28). The pretest ANOVA showed no significant differences between experimental and control groups at this stage, F(1,152) = .243, p > .05; however, the ANCOVA of the posttest-pretest differences indicated significant results, F(1,152) = 6.54, p < .01. The effect size was moderate (r = .62). These data show an improvement in the learning of reading strategies attributable to the intervention programme implemented.

Discussion and conclusions

The objective of this work was to test whether an intervention programme oriented to the development of the skills involved in formulating questions of different typologies about the textual content produced an improvement in reading comprehension. The results obtained indicate that this type of teaching contributes significantly to the improvement of reading comprehension. These data are in line with Gutiérrez-Braojos & Salmerón (2012), who point out that students can improve their use of strategies if appropriate learning experiences are implemented.

One of the reasons that may explain this improvement is that when readers search the text for information with which to formulate questions, they need to analyse and integrate the information they read, relating the content of several sections of the text; it is therefore likely that carrying out both processes, generating questions and searching for pertinent answers to the questions generated by classmates, contributes to a deeper understanding of the text.
This work has shown that reading comprehension does not arise automatically as a product of decoding; it is also necessary to develop comprehension strategies, a result consistent with current theory. Analysing the different levels of written comprehension, the group participating in the programme showed a higher level of mastery than the control students in syntactic structures, in which the decoding component plays a substantial role in grasping the semantic relations between the different parts of the sentence; this shows that reading strategies already exert an influence from the oral level. Decoding is a necessary tool for access to the literate world, and through it we decipher written language, but larger linguistic units demand a more active role of the reader, who must bring into play their own knowledge schemas and interact with the written information.

These contributions coincide with the postulates of Bohórquez, Cabal & Quijano (2014), who point out that learning to read is a process that occurs sequentially, and that it is very important that comprehension strategies be developed from the beginning of the reading process, when the alphabetic principle is acquired and apprentices become competent decoders.

With respect to syntactic-semantic comprehension, the students in the experimental group showed a higher level of mastery, obtaining higher degrees of comprehension when the demands of the written information were greater and more complex cognitive levels were required. This begins with the integration of information at a literal level, and the differences increase when the demands are oriented to making inferences about written information, to reorganising the main ideas, and to the capacity to appraise the textual content, express opinions, make judgements and formulate questions when integrating the information being read into one's own experiences and cognitive schemas.

The findings of this work coincide with previous studies showing the importance of practice in generating questions for the improvement of reading comprehension (Gutiérrez, 2016; Solé, 2014; Zárate, 2015). However, this is not at present a practice commonly used in schools, so on the basis of these results we recommend implementing programmes similar to the one carried out in this work from the first school levels, inasmuch as there is evidence that, from the first levels of compulsory schooling, students with reading difficulties employ few comprehension strategies and show deficiencies both in constructing a structured representation of the text and in making inferences and using metacognitive knowledge (Cano, García, Justicia & García-Berbén, 2014; Ripoll & Aguado, 2014).

As for the development of metacognitive reading skills, the improvements of the students in the experimental group show gains in the different learning strategies of the reading process, which contribute to awareness of the comprehension process. This can be seen in the results of the study, since relevant progress is observed in the different strategies involved in textual understanding: planning, monitoring and evaluation.
In short, the progress made by the participants in the intervention programme shows its effectiveness in favouring the acquisition of the strategies that the reader uses to understand a text. Therefore, at a practical level, it is suggested to design programmes oriented to the development of reading strategies based on instruction in the skills needed to formulate questions according to the different textual typologies, as was done in this work. Similarly, it would be of interest in future research to consider the influence of these variables in re-educational programmes aimed at overcoming learning difficulties in reading, as a strategy for improving comprehension skills.

Table 1. Means and standard deviations in textual reading comprehension and results of the analyses of variance and covariance for the experimental and control groups.
Crystal structures of three zinc(II) halide coordination complexes with quinoline N-oxide

The structures of the three related compounds dichloridobis(quinoline N-oxide-κO)zinc(II), dibromidobis(quinoline N-oxide-κO)zinc(II) and diiodidobis(quinoline N-oxide-κO)zinc(II) are presented. Herein we report the crystal structures of three complexes of quinoline N-oxide (QNO) with zinc(II) chloride, bromide and iodide. All three were obtained by 1:2 stoichiometric reaction of the zinc(II) halide with QNO in methanol and found to be mononuclear ZnX2(QNO)2 complexes with a distorted tetrahedral environment around the zinc ion.

Structural commentary

Compound (I) crystallizes in the monoclinic space group P21 (Fig. 1), whereas compounds (II) (Fig. 2) and (III) (Fig. 3) both crystallize in the monoclinic space group P21/c. Each structure contains one symmetrically independent molecule, the coordination sphere around each Zn atom being a distorted tetrahedron. Selected bond lengths and angles in these complexes are shown in Table 1. Compounds (II) and (III) are isostructural in both molecular conformation and crystal packing, while (I) differs in both aspects, as illustrated by an overlay of molecules (I) and (II) (Fig. 4a) on the one hand, and molecules (II) and (III) on the other (Fig. 4b). Most notably, (I) differs in the orientation of the QNO rings relative to each other, the C2-N1-N2-C11 torsion angles being −16.9 (5)° in (I) versus −113.9 (3)° in (II) and −111.6 (3)° in (III).

Hirshfeld surface analysis

The intermolecular interactions were further investigated by quantitative analysis of the Hirshfeld surface, visualized with CrystalExplorer 21 (Spackman et al., 2021) as surfaces mapped over d_norm, i.e. the contact distances to the nearest atoms inside and outside the surface, normalized by the van der Waals (vdW) radii of the corresponding atoms (r_vdW). Contacts shorter than the sums of vdW radii are shown in red, those longer in blue, and those approximately equal to the vdW sums as white spots. For (I), the most intense red spots correspond to the intermolecular contact O1···C9(1 − x, y − 1/2, 1 − z) [3.048 (9) Å] and the hydrogen bond C18-H18···Cl2(x, y + 1, z). The latter has the distances H···Cl = 2.53 Å (for the C-H distance normalized to 1.083 Å) and C···Cl = 3.416 (9) Å, within the previously observed range but shorter than the average values of 2.64 and 3.66 Å, respectively (Steiner, 1998). The other chloride ligand, Cl2, forms four H···Cl contacts of 2.83-2.98 Å, more typical of van der Waals interactions (Rowland & Taylor, 1996). For (II) and (III), the red spots correspond to C-H···X interactions, viz. C18-H18···X1, C5-H5···X1, C16-H16···X2 and C9-H9···X2, which can also be regarded as weak hydrogen bonds (Steiner, 1998). Analysis of the two-dimensional fingerprint plots (Table 2) indicates that H···H contacts are the most common in all three structures. X···H contacts make the second highest contribution, which increases in the succession (I) < (II) < (III), together with the size of the halogen atoms and hence their share of the molecular surface (16.9, 18.5 and 20.6%, respectively). Interestingly, π-π stacking in the structures of (II) and (III) gives only a modest increase of C···C contacts compared to (I), probably because it is counterbalanced by an overall decrease of the carbon atoms' share of the surface (21.4 > 19.5 > 18.3%). No halogen···halogen contacts are observed in any of the three structures.
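For readers unfamiliar with d_norm colouring, the short sketch below illustrates how a single contact would be classified under the commonly used definition of the normalized contact distance, d_norm = (d_i − r_i_vdW)/r_i_vdW + (d_e − r_e_vdW)/r_e_vdW. The van der Waals radii are standard Bondi values, and the split of the H···Cl distance into inside/outside components is purely illustrative; none of the numbers are taken from the surfaces reported here.

```python
# Illustrative sketch of the normalized contact distance d_norm used to colour
# Hirshfeld surfaces. Negative values (contacts shorter than the sum of vdW radii)
# show up red, values near zero white, and positive values blue.
VDW = {"H": 1.20, "C": 1.70, "O": 1.52, "Cl": 1.75}  # Bondi radii in angstroms

def d_norm(d_i: float, r_i_vdw: float, d_e: float, r_e_vdw: float) -> float:
    """Normalized contact distance for a point on the Hirshfeld surface."""
    return (d_i - r_i_vdw) / r_i_vdw + (d_e - r_e_vdw) / r_e_vdw

# Hypothetical H...Cl contact of 2.53 angstroms, split evenly between the
# inside (d_i) and outside (d_e) distances for illustration only.
value = d_norm(d_i=1.265, r_i_vdw=VDW["H"], d_e=1.265, r_e_vdw=VDW["Cl"])
print(f"d_norm = {value:.3f} ({'short contact' if value < 0 else 'long contact'})")
```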
Synthesis and crystallization

The water content of QNO and ZnBr2 was determined by thermogravimetric analysis. The formulation of each was found to be QNO·0.28H2O (MW = 150.21 g mol−1) and ZnBr2·0.86H2O (FW = 240.69 g mol−1). The title compounds were all synthesized in a similar manner. Compound (I) was synthesized by dissolving 0.0986 g of QNO·0.28H2O (0.656 mmol, purchased from Aldrich) in 33 mL of methanol, to which 0.0440 g of ZnCl2 (0.176 mmol, purchased from Strem Chemicals) was added at 295 K. The solution was covered with parafilm and then allowed to sit; X-ray quality crystals were grown by slow evaporation at 295 K. Yield: 0.0822 g (60.2%). Selected IR bands (ATR-IR, cm−1): […].

Figure 8: Hirshfeld surface for (I) mapped over d_norm. Figure 9: Hirshfeld surface for (II) mapped over d_norm. Figure 10: Hirshfeld surface for (III) mapped over d_norm.

Compound (II) was synthesized by dissolving 0.0983 g of QNO·0.28H2O (0.654 mmol) in 40 mL of methanol, to which 0.0778 g of ZnBr2·0.86H2O (0.323 mmol, purchased from Alfa Aesar) was added at 295 K. The solution was covered with parafilm and then allowed to sit; X-ray quality crystals were grown by slow evaporation at 295 K. Yield: 0.0866 g (46.7%). Compound (III) was synthesized by dissolving 0.0517 g of QNO·0.28H2O (0.352 mmol) in approximately 36 mL of methanol, to which 0.0524 g of ZnI2 (0.164 mmol, purchased from Aldrich) was added at 295 K. The solution was covered with parafilm and then allowed to sit; X-ray quality crystals were grown by slow evaporation at 295 K. Yield: 0.0910 g (52.3%).

Infrared spectroscopy confirms the presence of the QNO ligand in all three complexes. Characteristic IR bands include weak aromatic C-H stretches observed from 3020-3107 cm−1 and N-O stretches of the bound N-oxide in the range 1350-1150 cm−1; notably, a medium band observed in the ligand at 1311 cm−1 appears at 1225-1227 cm−1 in the three metal complexes. Finally, a broad absorbance in the free ligand from 3100-3500 cm−1 (assigned to the water O-H stretch) is absent in all of the metal complexes (Mautner et al., 2016).

Refinement

Crystal data, data collection and structure refinement details are summarized in Table 3. All carbon-bound H atoms were positioned geometrically and refined as riding: C-H = 0.95-0.98 Å with Uiso(H) = 1.2Ueq(C). Software used to prepare material for publication: OLEX2 (Dolomanov et al., 2009).

Special details (geometry). All e.s.d.s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.s are taken into account individually in the estimation of e.s.d.s in distances, angles and torsion angles; correlations between e.s.d.s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.s is used for estimating e.s.d.s involving l.s. planes.

Dibromidobis(quinoline N-oxide-κO)zinc(II) (II), crystal data fragment: weighting scheme with P = (Fo² + 2Fc²)/3; (Δ/σ)max < 0.001; Δρmax = 0.55 e Å−3; Δρmin = −0.35 e Å−3.
Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å²)

Atom   x             y             z             Uiso*/Ueq
Zn1    0.25508 (4)   0.26213 (9)   0.37264 (4)   0.0514 (2)
[…]
Prioritising the global response to curb the spread of COVID-19 in the fragile settings of the Global South

Globalisation impacts the epidemiology of communicable diseases, threatening human health and survival globally. The ability of coronaviruses to spread, quickly and quietly, was exhibited with Severe Acute Respiratory Syndrome in 2002-2003 and, more recently, with COVID-19. Not sparing any continent, the World Health Organization declared a COVID-19 pandemic on 11 March 2020. Although high-income countries have been disproportionately impacted by the increasing number of COVID-19 cases, SARS-CoV-2 continues to represent a looming threat to the Global South, leading the World Health Organization to state previously that 'Our biggest concern continues to be the potential for COVID-19 to spread in countries with weaker health systems' and that Africa could become the next epicentre. However, while academics, public health experts and macroeconomists discuss collaborative strategies among themselves to reduce morbidity, mortality and economic devastation, these discussions have not involved low- and middle-income countries. COVID-19 may cause unprecedented humanitarian health needs in countries already subjected to unaffordable, fragmented and fragile health systems; as COVID-19 unfolds a worldwide economic crisis, with the poor and other vulnerable groups affected disproportionately, building health system resilience, through an urgent and coordinated global response that allocates resources and funds efficiently, must be prioritised in this dynamic and shifting pandemic.

Global health threats

Weak health systems and governance

Health system performance depends on a country's economic performance. Low-income countries spend $41 per person on healthcare compared to $2,937 per person in high-income countries. 4 According to the World Health Organization, those aged over 60 years and those with underlying medical conditions are at highest risk of COVID-19; 5 69% of those aged 60 and over reside in low- and middle-income countries. 6 With limited specialist expertise for older people and finite domestic and external funds, health systems in low- and middle-income countries focus less on this population. There is a deficit of 2.4 million healthcare workers across Africa; 7 Africa has 2.3 healthcare workers per 1,000 population while the Americas have 24.8. 7 COVID-19 has overwhelmed health systems worldwide; even the wealthiest countries lack sufficient personal protective equipment, ventilators and hospital beds to meet pandemic demands. An inadequate response in low- and middle-income countries will lead to more acute shortages and adverse effects, including COVID-19 transmission to healthcare workers, further reducing numbers on the front line. Furthermore, the USA is readily accepting international medical graduates. 8 Despite acute demand, this is likely to cause an exodus from low-income countries, worsening the already-existing healthcare worker deficit.

Indecision and ambiguity in implementing containment measures is an important issue. In Bangladesh, for example, the government initially declared a 'holiday' to keep its citizens indoors. However, this was interpreted literally, and families rushed to tourist sites and to their country homes. Similarly, for political reasons, the government took a long time to effectively restrict congregational prayers in mosques. 9
On the mitigation front, the slow expansion of testing facilities leaves an unclear picture of the actual state of the disease. Added to this is the shortage, and often poor quality, of personal protective equipment.

Disease burden

Tuberculosis, human immunodeficiency virus, malaria and other communicable diseases disproportionately affect sub-Saharan countries. Lassa fever is endemic in Nigeria and Guinea. The health systems of Sierra Leone and Liberia were recently challenged by Ebola, causing over 11,000 deaths. 10 Additionally, non-communicable diseases account for 45% of Eritrea's mortality. 11 Burdened by communicable diseases and malnutrition, sub-Saharan and South Asian countries are particularly vulnerable to outbreaks. The world's poorest populations are at heightened risk of comorbidities, increasing their risk of severe COVID-19 illness, poverty-related out-of-pocket payments and catastrophic health expenditures. Furthermore, diverting resources into the emergency response will perpetuate additional adverse effects, including increasing morbidity and mortality from other diseases.

Dense populations, man-made and natural disasters

The health security of millions around the world is at risk due to state fragility and the inadequate global response to conflicts and social unrest. While COVID-19 public health measures are basic, implementation in low- and middle-income countries remains challenging. Many low-income countries lack access to water supply, sanitation, hygiene and affordable sanitisers. Social distancing and isolation are proving to be impossible in densely populated areas, including slums, where 66.2% of low-income-country urban populations reside. 12 Refugees also live in densely populated spaces with limited access to sanitation and healthcare, demonstrating the impact of migration on health. Three of the five countries hosting the largest numbers of refugees globally are low-income countries; 13 Pakistan, Uganda and Sudan host 1.4, 1.2 and 1.1 million refugees, respectively. 13 Sudan is also challenged by natural and man-made disasters. Due to its proximity to and reliance on the Nile River, flooding and droughts occur frequently. In fact, in September 2020, Sudanese authorities designated the country a natural disaster zone and declared a three-month state of emergency over its worst-in-a-century floods, which killed 99 individuals, destroyed 100,000 homes and affected over half a million people. 14 According to experts, climate change, which impacts the Global South more severely, is largely responsible for the Blue Nile rising to a record 17.58 metres this year. 14 While the successful 2019 Sudanese Revolution has paved the way for democracy, the country's longstanding internal conflict and outbreaks of violence have subjected its population to traumatic diseases. Moreover, as international leaders focus on domestic COVID-19 issues, the Security Council postponed its political mission of supporting Sudan's transition to civilian rule, delaying conflict resolution mechanisms. 15 Hosting 3.7 million refugees, Turkey is the largest refugee host globally; 13 while part of the Global North, ensuring health system resilience in Turkey is equally important. Lebanon and Jordan host the largest numbers of refugees per capita. 13 Bangladesh hosts 1.2 million Rohingya refugees; its Kutupalong camp accommodates over 625,000 of them, rendering it the world's largest refugee settlement. 16
Conflicts impact water and food security; 60% of those experiencing chronic hunger live in conflict-affected low- and middle-income countries. 17 The West Bank's aquifers are controlled by Israel, with 83% of the water used by Israel; 17 scarce water supply and the ongoing blockade in the occupied Palestinian territory will facilitate COVID-19 spread among Palestinian populations, aggravating economic and agricultural difficulties while worsening the health and wellbeing of residents. Syria's Al-Hol camp, sheltering over 70,000 individuals, is of notable concern due to the scarcity of water, food and medical services. 15 Since 2015, over 24 million people in Yemen have required humanitarian assistance. 15 Additionally, to prevent COVID-19 spread, Yemen's government has banned international flights, reducing relief efforts on the ground. 15 Over the last decade, Haiti has been devastated by torrential rains, hurricanes and earthquakes. In 2010, the earthquake killed 222,570 and injured 300,572 individuals. 18 Approximately 2.3 million people were displaced, 302,000 of them children. 18 The response was further complicated by a cholera outbreak, which killed 5,899 individuals, and by Hurricane Tomas, which caused significant flooding and deaths. 18

Transport access and vulnerability of women

Availability and affordability of transport systems represent barriers to healthcare access. Untimely access is common, especially in rural populations; in Malawi, the median travel time from home to a medical centre and to a central hospital is 1 h and 2.5 h, respectively; 19 5% of COVID-19 patients require critical care, including assisted ventilation, demonstrating the importance of sustainable transport access in reducing mortality. Real-time evidence demonstrates that men are more at risk of severe COVID-19 illness and death; however, women in low- and middle-income countries may be more vulnerable to COVID-19 due to gender norms and roles, highlighting gender and health inequalities. Women represent 70% of the health and social sectors. 20 During the 2014-2016 Ebola outbreak in West Africa, women were more prone to infection due to their role as front-line healthcare workers and caregivers; with limited transportation, women will predominantly become the caregivers of COVID-19 patients, hindering women's careers and ability to earn. As observed in emergency situations, including Ebola, diversion of resources may also increase maternal mortality. 21 Furthermore, increased domestic violence against women and children during lockdowns is well documented.

Harnessing the opportunity

While fragile settings may exacerbate COVID-19 spread and complicate the response, there is an opportunity to harness strengths developed by low- and middle-income countries. To date, Africa is the continent least impacted by COVID-19. African governments promptly shut down borders in mid-March, denying visas and banning entry of passengers arriving from COVID-19-affected countries. Returning residents were tested and quarantined. By widening the scope of testing, some countries adopted more aggressive and proactive approaches. To detect levels of community transmission, India is testing individuals who cannot be traced to coronavirus patients and who have no symptoms, as opposed to its initial protocol of testing individuals travelling back from COVID-19-affected countries and their immediate contacts. 22 Kenya's Ministry of Health tests all symptomatic individuals and conducts random public screenings.
Misinformation threatens the COVID-19 response; countries with large numbers of telecommunication lines have addressed misinformation. Nigeria is prohibiting panic-generating advertisements and those implying that certain products have curative or preventative effects. 23 To supplement these efforts, Facebook allowed the World Health Organization and the Nigerian Centers for Disease Control and Prevention to deliver coronavirus education on its platform. 23 Countries that have faced outbreaks are familiar with public health emergencies and have learned valuable lessons. Due to cultural beliefs, Ebola patients were considered 'deserving' of the disease, encouraging stigmatisation, fear, blame, guilt and shame. This extended to Ebola patients' contacts, delaying medical consultation and enabling spread. To reduce fear and transmission, the Ebola Survivor Corps was founded. 24 The Ebola Survivor Corps employed Ebola survivors as health advocates; employees delivered health education to communities and improved access to Sierra Leone's healthcare, rendering them a trusted source of health information. 24

Conclusion

An increasingly interconnected world facilitates the spread of infectious diseases. As challenging as the COVID-19 situation is in the West, if the virus is allowed to spread uncontrollably in fragile settings, a more serious scenario is expected. To end the COVID-19 threat globally, thereby preventing it from continuously re-emerging in countries that are currently facing calamitous consequences, developing a global response through coordinated, sustainable and productive strategies, while harnessing the Global South's efforts, is mutually beneficial in defeating this virus that respects no borders.

Declarations

Competing interests: None declared.
Ethics approval: Not applicable.
Guarantor: AM.
Contributorship: TO conceptualised and wrote the manuscript, incorporated feedback from co-authors and finalised the manuscript. AM and MC provided feedback and contributed to the concepts. The final manuscript was approved by all authors. The guarantor is AM.
Calcium Intake and Serum Calcium Level in Relation to the Risk of Ischemic Stroke: Findings from the REGARDS Study

Background and Purpose: Data on the association between calcium (Ca) and ischemic stroke are sparse and inconsistent. This study aimed to examine Ca intake and serum Ca levels in relation to risk of ischemic stroke.

Methods: The primary analysis included 19,553 participants from the Reasons for Geographic And Racial Differences in Stroke (REGARDS) study. A subcohort was randomly selected to create a case-cohort study (n=3,016), in which serum Ca levels were measured. Ischemic stroke cases were centrally adjudicated by physicians based on medical records. Cox proportional hazards regression for the cohort, and weighted Cox proportional hazards regression with the robust sandwich estimation method for the case-cohort analysis, with adjustment for potential confounders, were performed.

Results: During a mean 8.3-year follow-up, 808 incident cases of ischemic stroke were documented. Comparing the highest quintile to the lowest, a statistically significant inverse association was observed between total Ca intake and risk of ischemic stroke (hazard ratio [HR], 0.72; 95% confidence interval [CI], 0.55 to 0.95; P linear-trend=0.183); a restricted cubic spline analysis indicated a threshold-like non-linear association of total Ca intake with ischemic stroke (P non-linear=0.006). In the case-cohort, serum Ca was inversely associated with the risk of ischemic stroke. Compared to the lowest, the highest quintile of serum Ca had a 27% lower risk of ischemic stroke (HR, 0.73; 95% CI, 0.53 to 0.99; P linear-trend=0.013). The observed associations were mainly mediated by type 2 diabetes, hypertension, and cholesterol.

Conclusions: These findings suggest that serum Ca has an inverse association, and Ca intake a threshold-like non-linear association, with the risk of ischemic stroke.

Introduction

Because of its favorable effects on blood pressure, 1,2 hypertension, 3,4 type 2 diabetes, 5 insulin resistance, 6 and metabolic syndrome, 7 calcium (Ca) may play an important role in the prevention of ischemic stroke. Results from meta-analyses of studies exploring the association between dietary Ca intake and the risk of ischemic stroke are inconsistent but suggest a non-linear trend. 8,9 In 2013, a dose-response meta-analysis demonstrated an inverse association between dietary Ca intake and ischemic stroke risk in participants with low Ca intake (i.e., <700 mg/day), but a borderline positive association between Ca intake and the risk of ischemic stroke in participants with intakes >700 mg/day. 8 A more recent meta-analysis of 10 cohort studies found no association between dietary Ca intake and total stroke or stroke subtypes but predicted a non-linear association between dietary Ca intake and the risk of total stroke. 9 However, that study found that those with high Ca intake from dairy sources and those with a longer follow-up period (≥14 years) had a reduced risk of total stroke. Although the bioavailability of Ca intake is always a concern, studies on the association between serum Ca and the risk of ischemic stroke are sparse. In a prospective study, serum Ca was associated with an increased risk of ischemic stroke. 10 Moreover, a meta-analysis of three cohort studies reported a direct association between circulating Ca levels and risk of total stroke. 11 There is growing concern about the role of high Ca intake in elevating the risk of heart disease and stroke. 12-15
Nonetheless, no prospective cohort study has examined both Ca intake and serum Ca levels in relation to the risk of ischemic stroke in the same study. In the present study, we aimed to investigate the association of Ca intake and serum Ca level with the risk of ischemic stroke. In addition, we aimed to investigate whether any observed association was modified by sex, race, and region using data from the Reasons for Geographic And Racial Differences in Stroke (REGARDS) study.

Methods

The REGARDS study

The REGARDS study is a longitudinal population-based study among blacks and whites designed to investigate risk factors associated with excess stroke mortality in the USA among blacks and residents of the Stroke Belt region. 16 Prospective participants in the REGARDS study were randomly selected from commercially available nationwide lists purchased through GeneSys, Inc. (Daly City, CA, USA). 17 Prospective participants were sent letters by mail to introduce the study and were then contacted by telephone for recruitment between January 2003 and October 2007. Demographic data were collected using computer-assisted telephone interviews after initial verbal consent. During the subsequent baseline home visits, written informed consent was obtained and dietary and behavioral data were collected by self-administered questionnaires. Physical measurements, including weight, height, and cardiovascular health profile (electrocardiography and blood pressure), were recorded, and blood samples were collected during the baseline first home visits. The REGARDS cohort has been described in more detail elsewhere. 16,18

Of the initially recruited 30,239 participants, 10,509 were excluded due to missing data on dietary intake, income, stroke history, or other covariates. Additionally, we excluded 177 participants (n=90 incident hemorrhagic stroke cases, n=87 unidentified stroke cases), leaving 19,553 participants for the cohort analysis (Figure 1). None of the participants had an implausible value for total energy intake (<500 kcal/day or >5,000 kcal/day 19) in the data included in this study. We also conducted a case-cohort analysis investigating the association of serum Ca levels with the risk of ischemic stroke. The subcohort was randomly selected from the REGARDS cohort using stratified random sampling from each stratum jointly classified by age, sex, race, and region, with an overall sampling probability of 9%, and all incident stroke cases were included (n=730). In the case-cohort analysis, a total of 3,016 participants (with 730 incident ischemic stroke cases, 86 of which were in the random subcohort) that had data for serum Ca were included, after excluding those with non-ischemic stroke (n=55 hemorrhagic stroke) and those with missing values for key covariates (n=293) (Figure 2). The final weighted effective sample size for the case-cohort was 3,102.
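As a schematic of the stratified subcohort selection described above (roughly 9% of participants sampled within each age-group × sex × race × region stratum), a minimal Python sketch might look as follows. The data frame, stratum labels, and column names are all hypothetical and do not reproduce the REGARDS sampling procedure.

```python
import numpy as np
import pandas as pd

# Simulated cohort with hypothetical stratum variables.
rng = np.random.default_rng(42)
n_cohort = 20000
cohort = pd.DataFrame({
    "participant_id": np.arange(n_cohort),
    "age_group": rng.choice(["45-54", "55-64", "65-74", "75+"], n_cohort),
    "sex": rng.choice(["F", "M"], n_cohort),
    "race": rng.choice(["Black", "White"], n_cohort),
    "region": rng.choice(["Stroke Belt", "Stroke Buckle", "Non-Belt"], n_cohort),
})

# Sample about 9% of participants within every joint stratum.
subcohort = (cohort
             .groupby(["age_group", "sex", "race", "region"])
             .sample(frac=0.09, random_state=42))
print(len(subcohort), "participants selected into the random subcohort")
```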
22 The amount of each food consumed was calculated by multiplying the reported frequency by the portion size for each food item. The total amount of a nutrient contributed from each food was derived by multiplying the amount consumed by the amount of the nutrient in the given FFQ line item. Nutrients were summed over all FFQ food items to provide estimates for total daily nutrient intake. The Block FFQ had been validated for the assessment of nutrients including Ca with a correlation coefficient greater than 0.6 for Ca between Block FFQ and multiple-day 24-hour recall. 23,24 Total (dietary plus supplementation) and dietary (from food only) Ca intake were reported as milligram per day (mg/day). Supplemental Ca intake was assessed through identification in a medication inventory whereby the participants showed a trained health professional any medication they used at least once in the 2 weeks prior to the first in-home visit, and then Ca was estimated from actual Ca supplements, multivitamins or Ca-containing medications. A blood sample was collected using standard phlebotomy during the first home visit at baseline. Among participants included in the case-cohort study, serum Ca was measured using an automated enzyme colorimetric assay on the Roche Integra instrument (Roche Diagnostics, Indianapolis, IN, USA). The assay range was 0.4 to 100 mg/dL. Inter-assay coefficients of variation ranged from 1.88% to 2.82%. Outcome ascertainment The REGARDS study participants were followed every 6 months via telephone for stroke events and to obtain the reason for hospitalization if a participant was hospitalized. Medical records were obtained and evaluated if the participant reported seeking medical care for stroke events, transient ischemic attack, or death. 25 Once the medical record was received for a stroke event, a team of physicians verified and subtyped the stroke event as ischemic or hemorrhagic. 25,26 Ischemic stroke was defined according to the World Health Organization 27 as sudden focal, at times global neurological deficits lasting >24 hours with signs and symptoms corresponding to the involvement of focal areas of the brain or as a non-focal neurological deficit with imaging consistent with that of ischemic stroke. 16,28 Covariates Data collected using computer-assisted telephone interview included age (continuous), sex (female/male), race (black/white), regions, body mass index (BMI), socioeconomic status (education and income), regular aspirin use (yes or no), smoking, alcohol intake, food intake, and physical activity level. Region was categorized into non-Belt, Stroke Belt (Alabama, Arkansas, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, and Tennessee), or Stroke-Buckle (coastal plains in Georgia, North Carolina, and South Carolina with high stroke mortality). 29 Sex was self-reported. Education was categorized to less than high school, high school, some college, and college plus. Annual income was categorized to <20, 20 to 34, 35 to 74, and ≥75 USD. BMI was derived from weight in kg divided by height in meters squared. Alcohol use was categorized into three levels based on drinks per week, either none, moderate (0 to 7 drinks/ week in women, 0 to 14 drinks/week in men), or heavy (>7 drinks/week in women, >14 drinks/week in men). Smoking was categorized as never smoker, former smoker, or current smoker. Exercise was categorized into three levels based on the frequency of physical activity per week (none, 1 to 3 times, or 4 or more times). 
Two blood pressure measurements were taken using aneroid sphygmomanometer during home visits and the mean of the two measurements was recorded. Total calorie intake and vitamin D intake were generated from the FFQ. Other covariates including total cholesterol, high density lipoprotein in ng/mL) was generated from an ancillary study to adjust for in sensitivity analysis in the case-cohort analysis. Because Ca and magnesium (Mg) compete for intestinal absorption and renal reabsorption, we also adjusted for Mg intake in the cohort and serum Mg in the case-cohort models. 30,31 Statistical analysis Analysis of variance (ANOVA; for normally distributed continuous variables), and Kruskal-Wallis test (for non-normally distributed continuous variables and for ordinal variables) were used to compare covariates across Ca quintiles. We used the chi-square test to compare the distribution of categorical variables across Ca quintiles. We compared the risk of ischemic stroke by the levels of total, dietary, and supplemental Ca intake using Cox proportional hazards regression models adjusted for covariates in a sequential manner: model 1 included the exposure of interest, age, sex, race, BMI, region and the interaction of age and race. Model 2 additionally adjusted for education, income, smoking, alcohol consumption, and exercise. Model 3 further adjusted for total energy intake, regular aspirin use, and total Mg intake. The model on dietary Ca intake was adjusted for supplemental Ca intake, and the model on supplemental Ca intake was adjusted for dietary Ca intake, respectively. Model 4 was further adjusted for HDL, and vitamin D intake. Ca was categorized in quintiles in the analyses. To assess trend across quintiles of Ca, the median of each quintile, as a continuous variable, was included in the models. We also examined non-linear associations between Ca and the risk of ischemic stroke non-parametrically with restricted cubic spline analyses. 32 In the cubic spline analysis, we used the median of the first quintile of Ca as the reference and 4 knots. For supplemental Ca intake we used 50 mg/day as the reference category; participants in the first quintile of supplemental Ca did not consume a Ca supplement. We further conducted a stratified analysis to determine whether there was a difference in the associations by sex, race, and region. Interactions between variables in the models were considered significant at P-values ≤0.1. We examined whether hypertension, systolic blood pressure (SBP), fasting blood glucose (FBG), type 2 diabetes, total cholesterol, HDL, LDL, or triglyceride mediated the potential association between Ca and ischemic stroke each at a time using the method described by Hertzmark et al. (2012). 33 This method calculates the proportion mediated as percent mediated = [1-( β2 β1 )] × 100%, where β1 and β2 are regression coefficients from the model with and without intermediator, respectively. The standard error was calculated based on the covariance matrix between the two models. The covariates in the final model of the main analysis were included in the mediation analysis and are assumed to meet the confounding assumptions of mediation analysis. 34 We used the median of each quintile of Ca intake as the exposure and categorized the intermediate variables as a binary variable. We categorized total cholesterol ≥200 mg/dL as high level. We also used continuous forms of total cholesterol, HDL, LDL, SBP, and FBG and binary forms for type 2 diabetes and hypertension for mediation analysis. 
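As a minimal sketch of the proportion-mediated calculation described above, the following Python snippet implements the formula directly; the two coefficients are hypothetical log-hazard ratios and are not values from this study.

```python
# Proportion mediated: percent mediated = [1 - (beta_adjusted / beta_unadjusted)] * 100,
# where beta_adjusted comes from the model that includes the intermediate variable
# (e.g., hypertension) and beta_unadjusted from the model without it.

def percent_mediated(beta_with_mediator: float, beta_without_mediator: float) -> float:
    """Share of the exposure-outcome association explained by the mediator."""
    return (1.0 - beta_with_mediator / beta_without_mediator) * 100.0

beta_unadjusted = -0.30   # hypothetical coefficient, model without the mediator
beta_adjusted = -0.21     # hypothetical coefficient, model additionally adjusted for it
print(f"Percent mediated: {percent_mediated(beta_adjusted, beta_unadjusted):.1f}%")
```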
For the case-cohort analysis, we used weighted multivariable Cox proportional hazards regression to compute the parameters and robust variance estimates. We weighted non-case subcohort members by the inverse of the subcohort sampling fraction. Subcohort cases were assigned a weight equal to the inverse of the subcohort sampling fraction up to shortly before the failure time and a weight equal to 1 from that point to the failure time. Non-subcohort cases were assigned a weight of 1 and entered the risk set shortly before the failure time. 35 This analysis was based on the method provided by Barlow et al., 36 which has been explained in detail by Kulathinal et al. 35 (a schematic illustration of this weighting is sketched below). The proportional hazards assumption was tested by plotting the cumulative martingale residuals against the variable of interest and using the supremum test. We considered P values ≤0.05 statistically significant if not otherwise specified. All statistical analyses were conducted in SAS version 9.4 (SAS Institute Inc., Cary, NC, USA).

Data availability

Any reasonable request for the REGARDS datasets used in this study should be submitted to the REGARDS Publication and Presentation Subcommittee via http://www.regardsstudy.org; the datasets are available upon approval.

Results

During a mean follow-up of 8.3 years (standard deviation: 3.2), 808 incident cases of ischemic stroke were observed in the cohort. At baseline, participants with higher total Ca intake were more likely to be older and non-smokers and to have lower BMI. Participants with high total Ca intake had lower diastolic and systolic blood pressure, were less likely to have type 2 diabetes, and were more likely to consume alcohol at moderate levels, to exercise 4 or more times per week, and to have used Ca and Mg supplements and aspirin regularly (P<0.05) (Table 1). When comparing the highest quintile to the lowest quintile of total Ca intake, total Ca was significantly inversely associated with ischemic stroke (hazard ratio [HR], 0.72; 95% confidence interval [CI], 0.55 to 0.95; P linear-trend=0.183) (Table 2). In contrast, the restricted cubic spline analysis indicated a non-linear association between total Ca intake and ischemic stroke (P for non-linear trend=0.006) (Figure 3). Both results suggest a threshold effect of Ca on the risk of ischemic stroke. When comparing the highest quintile to the lowest, dietary Ca was not linearly associated with the risk of ischemic stroke (Table 2). A borderline non-linear association was also observed for dietary Ca intake (P for non-linear trend=0.053) (Supplementary Figure 1). In sensitivity analyses using quartiles and tertiles of total and dietary Ca, the results remained (data not shown). In a mediation analysis, the observed association between total Ca intake and ischemic stroke was mainly mediated through type 2 diabetes, hypertension, and cholesterol (Supplementary Table 1). Supplemental Ca intake was not significantly linearly associated with the risk of ischemic stroke (HR, 1.03; 95% CI, 0.80 to 1.32; P=0.449) (Table 2). Restricted cubic spline analysis indicated a significant non-linear association between supplemental Ca and the risk of ischemic stroke (P=0.034) (Supplementary Figure 2). Using those without supplemental Ca intake as the reference, the non-linear association remained (data not shown).
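The sketch below gives a simplified illustration of a weighted Cox fit for a case-cohort sample, in the spirit of the Barlow weighting described in the statistical analysis section: non-case subcohort members receive weight 1/sampling fraction and cases receive weight 1, with robust (sandwich) variance estimation. It simplifies the full method, which lets subcohort cases carry the large weight until just before their event time via a counting-process setup. The data are simulated, the variable names are hypothetical, and the snippet uses the lifelines package rather than the SAS procedures used in the study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "followup_years": rng.exponential(8.0, n).clip(0.1, 12.0),
    "event": rng.binomial(1, 0.25, n),           # ischemic stroke indicator
    "serum_ca_quintile": rng.integers(1, 6, n),  # exposure of interest
    "age": rng.normal(65, 9, n),
})

# Barlow-style weights (simplified): cases weight 1, non-case subcohort members
# weight 1 / sampling_fraction.
sampling_fraction = 0.09
df["weight"] = np.where(df["event"] == 1, 1.0, 1.0 / sampling_fraction)

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="event",
        weights_col="weight", robust=True)       # robust sandwich variance
cph.print_summary()
```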
In the case-cohort, those in the highest quintile of serum Ca consumed higher amounts of dietary Ca; had higher serum Mg, total cholesterol, LDL, triglycerides, and HDL; and were more likely to be female and to currently drink alcohol, compared with those in the lowest quintile of serum Ca (P<0.05) (Table 3). An inverse association of serum Ca levels with ischemic stroke was observed in the case-cohort analysis (HR, 0.73; 95% CI, 0.53 to 0.99; P=0.013) when comparing the highest to the lowest quintile (Table 4). In a sensitivity analysis further adjusting for SBP and type 2 diabetes in the final model, the result remained (HR, 0.71; 95% CI, 0.51 to 0.99; P=0.016). In a sensitivity analysis using data with a reduced sample size (n=1,601, 602 events) and additionally adjusting for serum 25(OH)D, the inverse association was slightly attenuated. The interactions between sex, race, or region and Ca in the models were not statistically significant at alpha=0.1.

Discussion

In this large prospective cohort study, we observed a threshold-like non-linear inverse association of Ca intake, and an inverse linear association of serum Ca levels, with the risk of ischemic stroke, independent of potential confounding variables. These associations were mainly mediated through diabetes, hypertension, and cholesterol. Our results are consistent with the findings of two meta-analyses that suggested Ca intake is non-linearly associated with the risk of stroke. 8,9 In the Larsson et al. (2013) 8 meta-analysis, among those with a mean Ca intake <700 mg/day the risk of ischemic stroke was lower by 16% for every 300 mg/day increase in Ca intake, while among those with a mean Ca intake >700 mg/day the risk of ischemic stroke was higher by 3%. In the same study, the authors stated that Ca has a beneficial effect in Asian populations with low to moderate Ca intake but not in American or European populations with high Ca intakes. The studies with mean Ca intake >700 mg/day were mostly European, with only one American study. None of the individual studies in the Larsson meta-analysis with mean Ca intake >700 mg/day found a significant association. Of note, the sample sizes of all the studies included in the Larsson et al. 8 meta-analysis were small. Unlike previous studies suggesting that the possible benefit of dietary Ca in reducing the risk of stroke is observed only among Asian populations with low to moderate Ca intake, we observed inverse or non-linear inverse associations between Ca and ischemic stroke in Americans. In the Tian et al. (2015) 9 updated meta-analysis, the authors reported an inverse association in the studies with a long duration of follow-up (≥14 years) and in the studies with dairy Ca sources. They did not find a significant association in short-duration studies or in studies with non-dairy sources of Ca, but the overall pooled result showed a significant inverse association. The difference between our study and the studies in that meta-analysis might be due to differences in study population (in our case, participants from the Stroke Belt region and Black participants were oversampled); other potential reasons might be differences in adjusted variables. As hypertension and type 2 diabetes were on the causal pathway of the association, we did not adjust for them. The Tian et al. 9 study hypothesized that the role of Ca in stroke might be due to the beneficial effect of Ca on hypertension. In our study, we confirmed that hypertension and type 2 diabetes were two possible mediators of the association between Ca and stroke.
Notably, none of the previous studies found a non-linear association, presumably because of limited statistical power. The discrepancy between the associations of total Ca and dietary Ca intake with ischemic stroke (a threshold-effect-like non-linear association for total Ca versus a less clear pattern and borderline non-linear association for dietary Ca) may be explained by multiple factors, including the sources of Ca intake and bioavailability. The absorption of Ca taken from food is affected by the source of the food, which may have Ca bound in oxalate or phytate, while Ca from supplements may be more readily absorbed. 37 Generally, about 30% of Ca from food is absorbed, but the bioavailability varies by the source of Ca. 38 Mean (fractional) Ca absorption is directly proportional to intake at high intakes. While Ca is mostly actively transported from the intestinal lumen, at high levels passive diffusion is also involved in Ca absorption. 39 Individuals who take Ca supplements may have higher socioeconomic status and may be more health conscious as well. 39,40

Though participants with high Ca intake also tend to have high serum Ca, serum Ca is tightly regulated by parathyroid hormone and calcitonin. Parathyroid hormone maintains the level of serum Ca by increasing bone resorption, renal reabsorption, and intestinal absorption through activation of vitamin D in the kidney. 41 In contrast, calcitonin reduces serum Ca by preventing bone resorption. 42 Thus, the level of serum Ca is hormonally regulated by parathyroid hormone, vitamin D, and calcitonin and is not determined only by the amount of Ca intake. In the present study, we adjusted for vitamin D intake in the cohort analysis and, in a sensitivity analysis, for serum 25(OH)D in the case-cohort analysis. After adjusting for vitamin D, the association for total Ca became stronger but was slightly attenuated for serum Ca. We did not have data to adjust for sex hormones, but experimental studies indicate that sex hormones impact cardiac contractility and Ca homeostasis. 43

The pathophysiology of ischemic stroke involves atherosclerosis that narrows the arterial blood vessels supplying blood to the brain, or occlusion of the blood vessels by a thromboembolus originating in the heart or other blood vessels, resulting in reduced blood flow to the neurons. 44 The mechanisms through which Ca is related to a lower risk of ischemic stroke may include reductions in platelet aggregation, 45 blood cholesterol, 46,47 blood pressure, 1,2 and the risk of type 2 diabetes. 48,49 In this study, type 2 diabetes, FBG, SBP, hypertension, HDL, and total cholesterol significantly mediated the associations between Ca intake and ischemic stroke. The mediation by cholesterol was evident after adjusting for vitamin D intake. Previous studies found that Ca supplementation reduced insulin resistance. 50,51 The nature of the mediation may be more complex than described in this study; for example, the effect of type 2 diabetes may be through worsening hypertension, given that type 2 diabetes affects the vascular system. While increases in intracellular Ca have been implicated in ischemic injury in ischemic stroke, higher serum Ca has been reported to be associated with reduced ischemic tissue injury in ischemic stroke by affecting excitotoxic pathways and ischemic preconditioning. 52
The non-linear association between Ca intake and the risk of ischemic stroke might be explained by the inadequacy of the protective physiologic benefits at low intake and the dominance of pathologic effects, such as cerebrovascular calcifications and atherosclerosis, at high intake. 53 In addition, the pattern and source of Ca intake might differ at low and high intakes. Those with inadequate Ca intake might be consuming Ca through dietary sources low in Ca, while those with high total intake might be taking high levels of Ca supplements, which might increase the risk of arterial calcification. There is evidence of increased cardiovascular events with increased supplemental Ca intake, but not with increased dietary Ca intake. 12

There are some limitations relevant to the interpretation of the results of this study. First, because of the small number of hemorrhagic strokes, we could not conduct a subtype analysis. Second, the study was restricted to Black and non-Hispanic White participants, which may limit the generalizability of the results to other races. Third, we cannot rule out residual confounding, for example by type of supplement or by sex hormones, given that this is an observational study. Fourth, Ca intake and serum Ca were measured only once, and we could not account for changes in dietary habits over the follow-up period. Finally, for the case-cohort analysis, serum Ca was not corrected for albumin-bound Ca, and for the cohort analysis, the bioavailability of Ca intake, which varies by food source, supplement type, and participant characteristics, was not accounted for.

This study also has several strengths. The prospective cohort design reduced recall bias, and the long duration of follow-up with a large number of cases of ischemic stroke conferred enough power to detect clinically meaningful differences. The study is unique in that we tested the non-linear association of Ca intake with the risk of ischemic stroke. Previous meta-analyses have suggested that Ca is non-linearly associated with the risk of ischemic stroke, but to the best of our knowledge, this is the first original prospective cohort study to examine non-linear associations between Ca intake and the risk of ischemic stroke. We also conducted a case-cohort analysis of the association of serum Ca with the risk of ischemic stroke to corroborate the findings from the intake analysis, with generally consistent results, which further strengthened our study. Furthermore, we confirmed the mediation of the association between total Ca intake and ischemic stroke by type 2 diabetes and hypertension.

Conclusions
In conclusion, Ca intake has a threshold-effect-like non-linear association with the risk of ischemic stroke. Type 2 diabetes and hypertension may mediate the association. Given the growing concern about the role of Ca intake in cardiovascular and cerebrovascular health and the variable results from individual studies, larger prospective cohort studies with repeated measurements of Ca intake, as well as randomized controlled trials, are needed to elucidate the beneficial and detrimental roles of Ca intake.

Supplementary materials
Supplementary materials related to this article can be found online at https://doi.org/10.5853/jos.2019.00542.

Disclosure
The authors have no financial conflicts of interest.
2019-10-03T09:03:12.263Z
2019-09-01T00:00:00.000
{ "year": 2019, "sha1": "d51e93d07d6363da0952154727be6508f4f09e10", "oa_license": "CCBYNC", "oa_url": "http://www.j-stroke.org/upload/pdf/jos-2019-00542.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7fd4b8159fe50d0a17f3c264101fa6acf1898040", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119128896
pes2o/s2orc
v3-fos-license
Filters and Matrix Factorization

We give a number of explicit matrix-algorithms for analysis/synthesis in multi-phase filtering, i.e., the operation on discrete-time signals which allows a separation into frequency-band components, one for each of the ranges of bands, say $N$, starting with low-pass, and then corresponding filtering in the other band-ranges. If there are $N$ bands, the individual filters will be combined into a single matrix action; that is, the combined operation on all $N$ bands is represented by an $N \times N$ matrix, where the corresponding matrix-entries are periodic functions, or their extensions to functions of a complex variable. Hence our setting entails a fixed $N \times N$ matrix over a prescribed algebra of functions of a complex variable. In the case of polynomial filters, the factorizations will always be finite. A novelty here is that we allow for a wide family of non-polynomial filter-banks. Working modulo $N$ in the time domain, our approach also allows for a natural matrix-representation of both down-sampling and up-sampling. In our implementation, the combined operation of input, filtering, down-sampling, transmission, up-sampling, action by dual filters, and synthesis merges into a single matrix operation. Hence our matrix-factorizations break down the global filtering-process into elementary steps. To accomplish this, we offer a number of adapted matrix factorization-algorithms, such that each factor in our product representation implements, in a succession of steps, the filtering across pairs of frequency-bands; this is of practical significance in implementing signal processing, including the filtering of digitized images. Our matrix-factorizations are especially useful in the case of processing a fixed, but large, number of bands.

Introduction
Our purpose is to establish factorization of matrices M_N(A) over certain rings A of functions, among them the ring of polynomials, and the L^∞ functions on the circle group T. An equivalent formulation is the study of functions on T which take values in the N × N scalar matrices. The general setting is as follows: Fix N, and consider the group SL_N(A), where the "S" is for determinant = 1. The object is then to factor arbitrary elements in SL_N(A) as alternating products of upper and lower triangular matrix functions; equivalently, upper and lower triangular elements in M_N(A) with the constant 1 on the diagonal.

In digital signal or image-processing one makes use of subdivisions of various families of signals into frequency bands. This is of relevance in modern-day wireless signal and image processing, and the choice of the number N of frequency bands may vary from one application to the next. There is a certain representation-theoretic framework which has proved successful: one builds a representation of the basic operations on signals: filtering, down-sampling (in the complex frequency variable), up-sampling, and dual filtering. These operations get represented by a system of operators in a Hilbert space of states, say H. A multiresolution (see Fig.
1) then takes the form of a family of closed subspaces in H.In this construction, "non-overlapping frequency bands" correspond to orthogonal subspaces in H; or equivalently to systems of orthogonal projections.Since the different frequency bands must exhaust the signals for the entire system, one looks for orthogonal projections which add to the identity operator in H.This leads to the study of certain representations of the Cuntz algebra O N , details below.Since time/frequency-analysis is non-commutative, one is further faced with a selection of special families of commuting orthogonal projections.When these iteration schemes (repeated subdivision sequences) are applied to the initial generators, one arrives at new bases and frames; and, in other applications, to wavelet families as recursive scheme. Our study of iterated matrix-factorizations are motivated by such questions from signal processing, and arising in multi-resolution analyses.In this case, elements in the group SL N (A) of matrix-functions act on vector-functions f in a complex frequency variable, where the components in f correspond to a specified system of N frequency-bands.When a matrix-factorization is established, then the action of the respective upper and lower triangular elements in M N (A) are especially simple, in that a lower triangular filter filters a low band, and then adds it to one of the higher bands; and similarly for the action of upper triangular matrix functions. Our analysis depend on a certain representation of the Cuntz algebra O N , where O N is an algebra generated by the basic operations on signal representations, filtering, down-sampling (in the complex frequency variable), up-sampling, and dual filter; see Fig 1. Factorization Algorithm In order to illustrate our use of representations of the Cuntz algebra O N in algorithms for factorization, we begin with the case of N = 2.The skeleton of these algorithms has three basic steps which we now outline. The Algorithm where F is some fixed ring of functions defined on a subset Ω ⊂ C such that T ⊂ Ω. Step 1: and set Let S i , i = 0, 1 be For the corresponding adjoint operators we therefore get: where the summation in (2), (3) are over points z, ω ∈ T. Then (S i ) i=0,1 are isometries in L 2 (T), and S * i S j = δ i,j I, 1 i=0 S i S * j = I where I denotes the identity operator in the Hilbert space L 2 (T).We will want F to be a ring of meromorphic functions, such that they are determined by their values on T = {z ∈ C, |z| = 1}; or we are simply working with functions on T. We now assume S i F ⊂ F, and S * i F ⊂ F, for all i = 0, 1. Step 3: Having form L, from (4) we get Step 4: and we get 1 f 1 and continue. Factorization Cases In the infinite-dimensional group SL 2 (L ∞ (T)), consider elements A with factorization as in (7): Optimal u, v = T uv with respect to Haar measure on T. So any functions we pick the one with f 1 attaching its minimum in (12) inf{( 12)|factorization ( 14) holds} ( 15) Set detA = 1, Solving for matrices A (1) in ( 16), we get With the above L in (18) we see that is the optimal factorization with a lower matrix as a left-factor. then the optimal solution (18) to the factorization problem Proof.When the function L in ( 18) is used in the computation of we see that for any z ∈ T, ((S * 0 f 1 )(z), (S * 1 f 1 )(z)) in C is in the orthogonal complement of (A(z), B(z)); indeed with (18) we get i.e., a pointwise identity for functions on T. 18) is 0 and so A = A (1) so the factorization steps. Proof. 
so unitary makes that the rows are orthogonal AC +BD = 0 in the inner product on C 2 z, w = z 1 w 1 + z 2 w 2 and |A| 2 + |B| 2 = 1. Note this using the repeated on any A (1) ∈ SL 2 (L ∞ (T)) each time pick L such that the infimum in ( 12) is attained. With the same argument, we factor matrix such that (35) holds. in (35).If in ( 35) is then, U = 0 ⇒ A = A (2) .Then following factorization results: factor out lower matrix on the left −→ factor out upper matrix on the left Or equivalently, ), and the factorization resulting from an iteration of the algorithm from (29).Then the last factor in (30) is of diagonal form if and only if the following hold: There are functions ϕ, ψ ∈ L 2 (T) such that and, in this case, the last factor in ( 30) is as follows: Proof.This follows from (29), and the Cuntz-relations: Factorizations We fix a value of N > 1 (i.e., the given number of frequency bands), and we begin with the formula for a canonical system of N isometries S i which define an associated representation of the Cuntz algebra O N .Said differently: The system of isometries {S i } satisfies the Cuntz relations with reference to the Hilbert space L 2 (T) where T is the circle group (one-torus) with its normalized invariant Haar measure.When the value of N is fixed, then the multi-resolution filters will then take the form of N × N matrix functions; the matrix entries might be polynomials, or, more generally, functions from L ∞ (T).Hence the questions about matrix factorization depends on the context.In the case of polynomial entries we will make use of degree, but this is not available for the more general case of entries from the algebra L ∞ (T).In every one of the settings, we develop factorization algorithms, and the particular representation of the Cuntz algebra will play an important role. The standard representation of O N , which we will use below, is given by the system of isometries {S j } as follows: Lemma 3.4.[16] Let N ∈ Z + be given and let F = (f j ) j∈Z + be a function system.Then F ∈ OF N if and only if the operators S j 34) satisfy where I denotes the identity operator in H = L 2 (T). We say that the isometries {S j } j∈Z N define a representation of the Cuntzalgebra O N , (S j ) ∈ Rep(O N , L 2 (T)). Lemma 3.5.[16] Let N ∈ Z + be fixed, N > 1, and let A = (A j,k ) be an N × N matrix-function with A j,k ∈ L 2 (T).Then the following two conditions are equivalent: (ii) A i,j = S * j f i where the operators S i are from the Cuntz-relations (35, 36). Proof.(i) ⇒ (ii).Writing out the matrix-operation in (i), we get Using S * j S k = δ j,k I, we get A i,j = S * j f i which is (ii).Conversely, assuming (ii) and using j S i S * j = I, we get j S j A i,j = f i which is equivalent to (i) by the computation in (37) above.Theorem 3.6.(Sweldens [25], [16]) Let A ∈ SL 2 (pol), then there are l, p ∈ Z + , K ∈ C \ {0} and polynomial functions U 1 , . . ., U p , L 1 , . . ., L p such that The filter algorithm corresponding to the matrix-factorization in (38) is as follows: And in steps: Remark 3.7.[16] Note that if then one of the two functions α(z) or δ(z) must be a monomial. The 2 × 2 case: Polynomials [16] To highlight the general ideas, we begin with some details worked out in the 2 × 2 case; see equation (28). To get finite algorithms, we should assume in the present subsection that the matrix-entries are polynomials. 
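To make the standard representation concrete, here is a minimal sketch (in Python, not code from the paper) of how the canonical O_2 isometries act on polynomial coefficient sequences when the monomial filters m_0(z) = 1, m_1(z) = z are assumed. It verifies the Cuntz relations and the polyphase splitting f(z) = f_0(z^2) + z f_1(z^2) on an example; the function names are hypothetical.

```python
# Sketch: the canonical O_2 isometries acting on polynomial coefficients,
# i.e. (S_i f)(z) = z^i f(z^2) and (S_i* f)(z) = sum_k f_{2k+i} z^k.
# This assumes the monomial filters m_i(z) = z^i; the paper's general
# construction allows arbitrary multi-band filter systems (m_i).
import numpy as np

N = 2

def S(i, f):
    """Up-sample the coefficient sequence f by N and shift by i."""
    out = np.zeros(N * len(f))
    out[i::N] = f
    return out

def S_adj(i, f):
    """Keep every N-th coefficient starting at index i (down-sampling)."""
    return np.asarray(f, dtype=float)[i::N]

f = np.array([1.0, 2.0, -1.0, 3.0])          # f(z) = 1 + 2z - z^2 + 3z^3
g0, g1 = S_adj(0, f), S_adj(1, f)             # polyphase components f_0, f_1

# Cuntz relations, checked numerically on this example:
assert np.allclose(S_adj(0, S(0, f)), f)      # S_0* S_0 = I
assert np.allclose(S_adj(1, S(0, f)), 0)      # S_1* S_0 = 0
assert np.allclose(S(0, g0) + S(1, g1), f)    # S_0 S_0* + S_1 S_1* = I
# The last identity is exactly the splitting f(z) = f_0(z^2) + z f_1(z^2).
```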
First note that from the setting in Theorem 3.6, we may assume that matrix entries have the form f H (z) as in section 3 but with H ⊂ {0, 1, 2, • • • }, i.e., f H (z) = a 0 + a 1 z + • • • .This facilitates our use of the Euclidean algorithm. where U is a unitary matrix-function, where and where Let U represent scalar valued matrix entry in a matrix function.We now proceed to determine the polynomials U 1 (z), L 1 (z), • • • , etc. inductively starting with where U and B are to be determined.Introducing 42), this reads But the matrix function is given and fixed see Remark 3.7.Hence is also fixed.The two polynomials to be determined are u and h in (43).Carrying out the matrix product in (43) yields: where we used the orthogonal splitting from Lemma 3.4.Similarly, from (44), we get and therefore γ = k 0 and δ = k 1 , by Lemma 3.5. Collecting terms and using the orthogonal splitting (45) we arrive at the following system of polynomial equations: or more precisely, It follows that the two functions u and h may be determined from the Euclidean algorithm.With (41), we get Remark 3.8.[16] The relevance of the determinant condition we have from Theorem 3.6 is as follows: Substitution of (46) into this yields: Solutions to (46) are possible because the two polynomials δ(z) and γ(z) are mutually prime.The derived matrix h 0 h 1 γ δ is obtained from A via a row-operation in the ring of polynomials. For the inductive step, it is important to note: The next step, continuing from ( 43) is the determination of a matrix-function C and three polynomials p, q, and L such that Here The reader will notice that in this step, everything is as before with the only difference that now 1 0 L 1 is lower diagonal in contrast with in the previous step.This time, the determination of the polynomial p in (50) is automatic.With (see ( 45)) and we get the following system: So the determination of L(z) and q(z) = q 0 (z 2 ) + zq 1 (z 2 ) may be done with Euclid: Combining the two steps, the comparison of degrees is as follows: Two conclusions now follow: (i) the procedure may continure by recursion; (ii) the procedure must terminate. Remark 3.9.In order to start the algorithm in (47) with direct reference to Euclid, we must have where ) > deg(α). Then determine a polynomial L such that We may then start the procedure (47) on the matrix function If a polynomial U and a matrix function B is then found for holds; and the recursion will then work as outlined. In the following, starting with a matrix-function A, we will always assume that the degrees of the polynomials (A i,j ) i,j∈Z N have been adjusted this way, so the direct Euclidean algorithm can be applied. The 3 × 3 case The thrust of this section is the assertion that Theorem 3.6 holds with small modifications in the 3 × 3 case. Comments: In the definition of A ∈ SL 3 (pol), it is understood that A(z) has detA(z) ≡ 1 and that the entries of the inverse matrix A(z) −1 are again polynomials. Note that if L, M, U and V are polynomials, then the four matrices Theorem 3.10.[16] Let A ∈ SL 3 (pol); then the conclusion in Theorem 3.6 carries over with the modification that the alternating upper and lower triangular matrix-functions now have the form (55) or ( 56)-( 57) where the functions L j , M j , U j and V j , j = 1, 2, • • • are polynomials. 
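Before the general N × N case, the following small sketch illustrates the elementary degree-reduction step behind the Euclidean-algorithm argument in the 2 × 2 polynomial setting: left-multiplication by an upper-triangular factor with polynomial entry -q replaces a first-column entry by its remainder under polynomial division. The example matrix entries are made-up polynomials and the helper name is hypothetical, not taken from the paper.

```python
# Sketch of one degree-reduction (lifting) step on a 2x2 polynomial matrix:
# left-multiplying by the elementary factor [[1, -q], [0, 1]] replaces the
# entry a(z) by its remainder under division by c(z) (one Euclid step).
# numpy convention: coefficients listed from highest to lowest degree.
import numpy as np

def reduce_first_column(A):
    """One elementary step: A -> [[1,-q],[0,1]] @ A with q = a div c."""
    a, b = A[0]          # first row:  [a(z), b(z)]
    c, d = A[1]          # second row: [c(z), d(z)]
    q, r = np.polydiv(a, c)                      # a = q*c + r, deg r < deg c
    new_b = np.polysub(b, np.polymul(q, d))      # b - q*d
    elementary = [[np.array([1.0]), -q],
                  [np.array([0.0]), np.array([1.0])]]
    return [[r, new_b], [c, d]], elementary

# Example: a(z) = z^3 + 2z + 1 and c(z) = z + 1 (mutually prime, cf. Remark 3.8)
A = [[np.array([1.0, 0.0, 2.0, 1.0]), np.array([1.0, 0.0])],
     [np.array([1.0, 1.0]),           np.array([1.0])]]
reduced, factor = reduce_first_column(A)
print("degree of remainder:", len(reduced[0][0]) - 1)   # strictly below deg c(z)
```

Repeating such steps, alternating between the two rows (i.e. between upper- and lower-triangular factors), is what drives the degrees down and makes the factorization terminate in the polynomial case.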
The N × N case Below we outline the modifications to our algorithms from the 2 × 2 case needed in order to deal with filters with N (> 2) bands, hence factorization of N × N matrix functions.The main difference when the number of frequency bands N is more than 2 is that in our factorizations, both the lower and the upper triangular factors, must take into account operations which cross between any pair of the total system of N frequency bands. Theorem 3.11.[16] Let N ∈ Z + , N > 1, be given and fixed.Let A ∈ SL N (pol); then the conclusions in Theorem 3.6 carry over with the modification that the alternative factors in the product are upper and lower triangular matrix-functions in SL N (pol).We may take the lower triangular matrix-factors and the upper triangular factors of the form U = (U i,j ) i,j∈Z N with Note that both are in SL N (pol); and we have Step 1: Starting with A = (A i,j ) ∈ SL N (pol).Then left-multiply with a suitably chosen U N (−U ) such that the degrees in the first column of U N (−U )A decrease, i.e., In the following, we shall use the same letter A for the modified matrix-function. Step 2: Determine a system of polynomials L 1 , • • • , L N −1 and a polynomial vector-function or equivalently Step 3: Apply the operators S j and S * j from section 3 to both sides in (63).First (63) takes the form: For i = 1, we get By (62) and the assumptions on the matrix-functions, we note that the system (64) may now be solved with the Euclidean algorithm: with the same polynomial For the polynomial function f 1 we then have i.e. The process now continues recursively until all the functions Step 4: The formula (63) translates into a matrix-factorizations as follows: With L and F determined in (63), we get as a simple matrix-product taking B = (B i,j ) and where we used Lemmas 3.4 and 3.5. Step 5: The process now continues with the polynomial matrix-function from (67) and (68).We determine polynomials U Step 6: As each step of the process we alternate L and U ; and at each step, the degrees of the matrix-functions is decreased.Hence the recursion must terminate as stated in Theorem 3.11. L ∞ (T)-matrix entries. While the case N = 2 is motivated by application to the high-pass v.s.low-pass filters, may result for the N = 2 case carry over.To see this, we first define the Cuntz-algebra O N in general the relations are when the elements (S i ) N −1 i=0 are given symmetrically.Each case (69) has many representations; for example if (m i (z)) N −1 i=0 , z ∈ T, is a system of filters corresponding to N frequency bands, we may obtain a representation of O N acting on the Hilbert space L 2 (T) as follows (S i ψ)(z) = m i (z)ψ(z N ), ∀z ∈ T, ψ ∈ L 2 (T). (70) For i ∈ {0, 1, • • • , N − 1}, the adjoint operator of S i in (70) is A direct verification shows that the Cuntz-relation (69) are satisfied for the operators (S i ) N −1 i=0 in (70) if and only if the system (m i ) N −1 i=0 is a multi-band filter covering the N frequency bands. Proof.With the arguments above, in the space O N of N = 2, we now get matrix, the system: S * j f 0 = g 0,j , L i g 0,j + S * j f i = g i,j , and for i = 1, 2, • • • , N − 1, which is desired conclusion. 3.6 Optimal factorization in the case of SL N (L ∞ (T)) Fix N > 2, and consider the usual inner product in C N , z, w :=
2014-12-09T20:27:02.000Z
2014-12-09T00:00:00.000
{ "year": 2014, "sha1": "242a5131411e231e352407483912e570490b9cd0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "242a5131411e231e352407483912e570490b9cd0", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
5040728
pes2o/s2orc
v3-fos-license
HCI meets Material Science: A Literature Review of Morphing Materials for the Design of Shape-Changing Interfaces

With the proliferation of flexible displays and the advances in smart materials, it is now possible to create interactive devices that are not only flexible but can reconfigure into any shape on demand. Several Human Computer Interaction (HCI) and robotics researchers have started designing, prototyping and evaluating shape-changing devices, realising, however, that this vision still requires many engineering challenges to be addressed. On the material science front, we need breakthroughs in stable and accessible materials to create novel, proof-of-concept devices. On the interactive devices side, we require a deeper appreciation for the material properties and an understanding of how exploiting material properties can provide affordances that unleash the human interactive potential. While these challenges are interesting for the respective research fields, we believe that the true power of shape-changing devices can be magnified by bringing together these communities. In this paper we therefore present a review of advances made in shape-changing materials and discuss their applications within an HCI context.

INTRODUCTION
Current consumer devices, such as laptops, smartphones and wearable devices, often have form factors that are determined by their use of flat, planar display technology. In recent years, the availability of thin-film flexible displays and smart materials has enabled HCI researchers to actively explore organic [88], morphing [148,198] and more expressive interactive forms. From interactive spherical displays [16] to mobile phones that bend to notify a user of an incoming call [75], and to pneumatic interfaces that expand to become exoskeletons or couches [202], there are many recent examples of shaped interface design in the literature (see [191] for an overview). From a design perspective, this transition from flat to shaped interfaces is more complex than simply readjusting current practices. The introduction of shape blends the boundary between interaction and industrial design [87], and requires a next-generation designer to conceptualise interaction in an object's form. This change puts a greater need for HCI practitioners to learn about and adapt the advances made in material science and to quickly apply them towards shaped devices. The maturation of 3D printing and its influence on prototyping [152], and the emergence of thin-film displays and their relationship to bendable interfaces [110], are testament to the fact that the advances in material science can have a direct impact on HCI research. In this paper, we discuss how the evolving relationship between HCI and material science can be framed, and why synergies between the two fields are critical for the design of shape-changing devices.
As a first step, we contribute a review of developments in shape-changing material science to establish a baseline literacy and to make recent work from material science available to the HCI community. Within this review we make a systematic exploration of recent developments in material science, focusing on the technologies applicable to the context of shape-changing devices, including stretchable structures, deployable systems, variable stiffness materials and shape memory materials and discuss their potential application for morphing interactive devices, with the aim of building a bridge between material science and HCI. Examples are provided from the broad HCI literature where these technologies have already been implemented. Finally, we discuss the challenges in bridging the gap and propose a way forward. Our contribution is a road map for designers who want to learn more about the advances in material science and use them for the design of shape-changing interfaces. REVIEW OF SHAPE-CHANGING MECHANISMS In this section we present a review of the outputs from material science that could enable morphing capabilities for shapechanging devices. Thill et al. [236] present a review of shapechanging concepts for aircraft morphing skins. This work is comprehensive and highly cited in its field. We have therefore used the shape-changing categories from this paper that we consider to be most applicable to HCI; namely stretchable structures, deployable structures, variable stiffness materials and shape memory materials. We then discuss how the different technologies within these categories could be applied within the HCI community. Table 1 contains the key material science references within this paper and summarises outputs from HCI with respect to each shape-changing mechanism. Several review papers have been published that define the different types of shape-change and their applications. Organic User Interfaces (OUIs) describes how computer interfacesno longer limited to rigid flat surfaces-can exhibit shape, deformation and non-planar forms [88]. Rasmussen et al. [191] present a review of existing work on shape-changing interfaces and identify eight different types of deformation. Roudaut et al. [198] propose the term shape-resolution that extends the definition of display resolution to shape-changing interfaces. It is based on the model of Non-Uniform Rational B-splines (NURBS) and has ten features to characterise shapechange mathematically. Coelho et al. [40] adopted a more technology driven approach with their taxonomy describing the technological properties of shape-changing devices. Examples include power requirements, the ability to memorise new shapes and input stimulus, such as voltage potential, or the ability to sense deformations. In this paper, rather than focusing on shape-change from an HCI perspective, we review current state-of-the-art morphing technologies that have been developed to meet desirable shape-change within engineering industries (e.g. aerospace). We also discuss how these technologies may be harnessed by HCI researchers for the development of shape-changing interactive devices. Stretchable Structures Stretchable structures are the basis of shape-changing devices that require large changes in surface area and rely on a material/structure that is compliant enough to allow for large-scale deformation. In this section, two classes of stretchable structures are discussed: elastomers and auxetic materials. 
Elastomers Elastomers, such as silicone, are a type of polymer that have historically been used in seals and adhesives, gloves, tyres, toy balloons, rubber bands, in shock absorbers and in moulded flexible parts. These materials are typically safe in their end form, widely available and are relatively easily manipulated by the user. Their main advantage is their low elastic modulus (i.e. the resistance of a material to being stretched, also known as Young's Modulus) which enables them to easily deform (i.e. strain) up to 1000% of their original length [32,236], resulting in large achievable changes in topology. Due to their weak intermolecular structure, elastomers can stretch easily without large forces and return to their original shape when the force is removed, thus exhibiting a shape-memory effect [107]. Depending on their molecular structure, elastomers fall into one of two categories: thermoset (such as silicone rubber) or thermoplastic. Whilst thermoset elastomers are typically stronger, thermoplastics are melt-processable and easily recycled, lending themselves to manufacturing processes such as 3D printing [223]. The properties of these materials display non-linear behaviour, meaning that different shape-changing responses may be achieved depending on how quickly the material is stretched and on the surrounding temperature [27], indicating the need for tightly controlled user conditions to ensure repeatability and minimal degradation of the material. The glass transition temperature (T g ) is a particularly important characteristic of polymers and is the temperature region at which a polymer transitions from a brittle, glassy state to a soft, rubbery state [27]. Elastomers, therefore, must be used at a temperature well above their T g point in order to exhibit the desired characteristics for shape-change. In recent years, elastomers have been developed for new applications including artificial muscles [6,176], soft robotics [141,216,242,246] and stretchable electronics [111,112,212,213]. Silicone has frequently been used in the development of pneumatically actuated soft robots due its durability and the ability to fabricate internal chambers that can be inflated and deflated to enable specific deformation patterns from a single pressure source [98,142,151]. Advances in stretchable electronics have been reviewed in [197], and the authors suggested that these technologies show promising potential for several HCI applications including sensory skins for robotics, structural health monitors and wearable communication devices. Elastomers within HCI Although many instances of stretchable structures within the HCI community have been reported using fabrics [146,173,195,199], few studies have employed elastomers alone. Polydimethylsiloxane (PDMS), a silicone-based organic polymer, served as the base material of iSkin [247] and Stretchis [248] and enabled the development of stretchable user interfaces for sensing and display. A significant drawback of elastomers, particularly for shape-change, is the trade-off between stretchability and strength. Typically, a material that is able to undergo large increases in surface area has low strength (therefore not particularly robust) and vice versa [221], which has likely prevented their use in larger scale prototypes by HCI researchers without being coupled with another component. However, elastomeric polymers show particular promise for achieving high resolution shape-change due to their highly deformable nature. 
Some of these materials are costly and require complex knowledge of polymer processing (which has potentially been an inhibitor in their adoption in shape-changing research). Nonetheless, many are still easily accessible to HCI researchers and do not require any specific skills or equipment other than the ability to mix together two components in a specified ratio. Auxetic materials have been synthesised as foams [29,30,204,207], ceramics [126], composites [36,60], crystals [12,79,264] and polymers [3,22,181,192], with re-entrant honeycomb cellular structures being the most extensively researched to date [149,206,208,261]. The hexagonal lattice shown in Figure 1 is perhaps the simplest form of an auxetic cellular arrangement, however, other tessellating geometries, such as chiral [2] and rotating unit [78,90], are also capable of creating an auxetic shape-changing mechanism. Further attempts have been made to develop smart auxetic structures from shape memory alloys (SMAs), in order to introduce some multifunctional capability such as actuated shape-change [205]. An interesting characteristic of auxetics for shape-change is that these structures display synclastic curvature when bent. In other words, unlike non-auxetic structures which form a saddle shape when bent (Figure 2a), auxetic materials form a domeshape devoid of any crimps (Figure 2b), making them suitable for designing and building structures with complex curvatures and shapes [57,203]. In [119] the authors exploited this characteristic to physically realise complex surfaces such as shoes, sculptures, face masks and clothing via auxetic linkages by introducing cuts into the material so that the elements formed could rotate relative to each other in an auxetic manner. Auxetic Materials within HCI Auxetic materials display many advantageous properties, particularly for shape-changing applications, such as the ability to achieve large changes in surface area, a high resistance to fracture (i.e. robustness) and high energy absorption [78,261]. However, their widespread use has largely been limited by the complex procedures to generate these materials using traditional manufacturing methods. With recent advances in 3D printing, the potential exists to more readily fabricate cellular designs to introduce shape-changing functionality, particularly in the case of 3D re-entrant structures [118,149,153,260]. For example, Theodoros et al. [235] 3D printed an auxetic structure that could be pneumatically actuated to achieve a change in curvature, and in [99] the authors 3D printed a metamaterial door latch from a single block of NinjaFlex (a flexible TPU filament) that enabled rotary movement of the handle to be transformed into linear motion of the latch. Although the modelling of the precise deformation of these structures may be complex, the lack of awareness of auxetics as a means for shape-change is likely to be the main reason for the scarcity in HCI literature. Yet, with the developments in 3D printing and extensive literature detailing auxetic designs, these structures are now more accessible to the HCI community as there is no longer a need for expensive equipment or materials. Auxetic materials therefore show promising potential for shape-change, particularly when a degree of curvature is required. [201] (© 2014 image reproduced with permission from Elsevier). Deployable Structures Deployable shape-changing mechanisms enable structures and devices to be stowed for transport or easy handling and expanded when required. 
In the case of shape-changing devices this may also enable a change in function of the device depending on the current configuration. In this section, three classes of deployable shape-changing mechanisms are discussed: rollable, foldable and inflatable. Rollable Structures Examples of rollable mechanisms found in everyday use include roller blinds, garage shutters and for efficient storage of fabrics and other flexible materials. A large proportion of research on rollable structures to date has focused on space applications [239], most notably for in-space deployment of lightweight solar sails with minimal packing volume during launch. Rollable carbon-fibre reinforced polymer (CFRP) booms were developed by DLR for an Earth-orbiting Solar Power Satellite (SPS) [211] that consisted of two laminated sheets in an Ω-shape bonded together to form a tubular structure. Flattening this shape made it easier to bend in one direction, enabling the booms to be coiled for storage [226]. Araromi et al. [7] created a microsatellite gripper that utilises a pre-stretched, rollable dielectric elastomer membrane bonded to a flexible but inextensible frame that remains rolled-up until the pre-stretch is released during deployment. Another advantage of some rolled, deployable devices such as a carpenter's tape, is that they can automatically uncoil and then snap into a stable uncoiled configuration that is load-bearing [180]. An exciting prospect for rollable structures is in the development of flexible electronics, with advances in thin, flexible display technology enabling the concept of rollable displays to be explored [143]. This is perhaps the application of rollable structures that has been of most interest within HCI to date. Several studies have focused on developing the technology, including [34,97,143,229,252]. Typically metal foils, ultrathin glasses and plastic films are considered ideal materials for the flexible substrate, with rollable polymer films showing particular promise due to their flexibility, low cost and excellent optical clarity, in addition to being amenable to a wide range of manufacturing processes such as roll-to-roll processing and inkjet printing [103]. However, they tend to suffer from a high coefficient of thermal expansion (CTE), i.e. there is a large change in the material size with changes in temperature, which can make integration with display layers a challenge [35]. Rollable Structures within HCI Several studies investigate the user interaction with rollable devices and explore the physical modes of interaction [110,154,224]. However, the authors all noted that despite recent developments in flexible displays, the components required to realise such devices are still unavailable and the focus of their studies landed primarily on the interaction with the rolling mechanisms rather than the display technology itself. Furthermore, the flexible nature of the inorganic thin films used in electronic devices tends to promote brittle failure [133], highlighting that the challenges in utilising rolling structures for shape-changing interfaces lie not only with HCI researchers, but also with material scientists and the need for further developments in the materials technology. Foldable Structures The high strength-to-weight ratio of folded objects enables the development of thin, lightweight, hollow, shape-changeable geometries that can be easily deployed into 3D and flattened into 2D for storage and transport [166]. 
In recent years, engineers have looked to the ancient Japanese tradition of origami to inspire the evolution of engineering structures that can be fabricated, assembled, stored and morphed in unique ways [72,177]. The applications of foldable structures are widespread and have been implemented in an array of practical scenarios including in the design and deployment of solar sails [211] and space telescopes [55,249], in sandwich panel cores [66,85], in the folding of sheet metal [53], in packaging and containers [52,251], in robotics [62,89,127,167], biomedical devices [65,122,190] and electronics [80,96,161,218,228,259]. In origami mathematics, a fold is regarded as an ideal surface having zero-thickness where any deformation of the surface does not result in stretching, contraction or self-intersection. The location of these folds are known as creases and they, in addition to the direction, magnitude and sequences of folding, determine the shape of the structure [49,177]. When implementing origami to create physical structures, the surface no longer maintains zero-thickness and the magnitude of a fold is described by the folding angle and the radius of curvature at the fold line. To generate these folds, one of two approaches must be taken; local bending of the material or the use of a hinge. The latter concept is referred to as rigid origami, where the facets and crease lines are replaced with panels and hinges, and is particularly relevant for large shape-changing structures that require a high degree of stiffness [230]. For certain applications, such as in remote locations, at very small or large length scales, or where packaging is complex and requires automation, the capability of these structures to self-fold becomes essential. The active folding of structures has been reported using shape memory materials [241], polymer swelling [101] and magnetic fields [62], and an extensive review of self-folding mechanisms for both hinge and bending type folds using active materials are reported in [177]. For example, in [83] the authors developed a self-folding sheet consisting of triangular tiles of glass-fibre impregnated with resin, that uses SMAs attached to the upper and lower surfaces to enable actuation. To address the issue of sheet thickness, an elastomer is used for the joints and a series of magnets enable folding into any polyhedral shape ( Figure 3). Felton et al. [62] presented self-folding shape-memory composites made from two outer layers of prestretched polystyrene (a shape-memory polymer (SMP)), two layers of paper and a printed circuit board (PCB) made from polyimide and copper. When heated, the SMP contracts causing the composite to fold. SMPs have also been used to create a number of different self-folding composite shapes; from boxes to Miura-ori patterns [241]. The folding and unfolding of structures becomes increasingly complex with increasing number of folds due to the expanding number of folded possibilities [139]. A well-known example of origami folding that was initially created to efficiently pack and deploy solar panels for space missions is the Miura-ori tessellated folding pattern [150]. This herringbone pattern consists of a series of convex mountain and concave valley creases that enables the entire structure to be folded or unfolded simultaneously, avoiding the complexity of folding sequences [219]. Schenk et al. 
[209] formed folded cellular structures by stacking individual Miura-ori sheets that were bonded along the joining fold lines, in which the folding kinematics were preserved. By varying the unit cell geometry within each layer, the authors developed a self-locking, folded cellular structure, where the motion could be halted in a predetermined configuration. Adopting this idea into shape-changing devices may enable its mechanical functionality to be altered on demand. In addition to folding, kirigami also permits cutting to obtain 3D shapes from 2D sheets, enabling not only changes in shape, but also large changes in volume with highly tailorable properties in each axis [159]. Neville et al. [159] detailed the manufacturing process of honeycomb kirigami structures that involves a sequence of cutting, corrugating and folding. Although they used Polyetheretherketone (PEEK) to demonstrate their design, the authors suggested that this method could be applied to any material, provided that it can be cut and folded. They showed that a series of threaded cables was a simple method of deforming the structure (Figure 4), but smart materials such as SMPs could also be used for actuation [158]. Dias et al. [51] also demonstrated that mechanical actuators can be designed that enable roll, pitch, yaw and lift, by tuning the arrangement and location of cuts within thin elastic sheets. Foldable Structures within HCI Many instances of origami folding can be found within HCI, such as within fashion and textiles to create shape-changing skirts [179] and jackets [137], and in the development of interactive objects [33]. Origami mechanisms have also been used to create interactive folding displays [76,109,129] and input devices [70,116]. For example, Olberding et al. [166] introduced Foldio, a fabrication technique for Foldable Interactive Objects, i.e. folded sheets of paper, plastic or cardboard, that can sense user input such as touch and deformation, and display output and actuated shape change through printed electronics. They demonstrated this technology through a series of applications including interactive packaging and furnishing, paper prototyping, custom-shape input and output devices, such as a game controller, and a shape-changing display. The broad literature available shows that foldable structures are not a new concept in HCI, particularly with regards to origami-inspired devices. The basics of folding are easy to implement, reproducible and the shape-changing mechanism can be achieved with almost any compliant material. However, they require an external structure to guide them into shape and as a result, the speed and maximum displacement of the shapechange is highly linked to the actuation method [177]. Another limitation is that these structures are considered membrane materials, i.e. to enable better foldability, a high bending stiffness must be avoided, leading to a compromise in structural performance (strength and stiffness) as the structures can only take load in tension and not in compression [53,54]. Such trade-offs are likely to impact the characteristics of the device, but we expect to see more innovative solutions as we move towards higher fidelity developments and gain an improved understanding of the kinematics of folding. By cutting as well as folding (kirigami), it may be possible to create devices that can achieve large changes in volume as well as surface area. 
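As a minimal illustration of the rigid-origami view described above, in which facets are treated as rigid panels rotating about crease hinges, the snippet below folds one rectangular facet about a crease line by a prescribed fold angle using Rodrigues' rotation formula. The facet dimensions, crease axis and fold angle are arbitrary assumptions; a real crease pattern such as Miura-ori would chain many such rotations with compatibility constraints between neighbouring facets.

```python
# Sketch: rigid-origami style fold of one facet about a crease line.
# The facet corners, crease axis, and fold angle are arbitrary illustrative
# values; real crease patterns (e.g. Miura-ori) chain many such rotations.
import numpy as np

def rotate_about_axis(points, origin, axis, angle):
    """Rotate points about the line origin + t*axis by `angle` (Rodrigues)."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    p = np.asarray(points, dtype=float) - origin
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    rotated = (p * cos_a
               + np.cross(k, p) * sin_a
               + np.outer(p @ k, k) * (1 - cos_a))
    return rotated + origin

# Flat facet lying in the xy-plane, hinged along the y-axis (the crease).
facet = np.array([[0, 0, 0], [0, 1, 0], [1, 1, 0], [1, 0, 0]], dtype=float)
crease_origin, crease_axis = np.zeros(3), np.array([0.0, 1.0, 0.0])

folded = rotate_about_axis(facet, crease_origin, crease_axis, np.radians(60))
print(np.round(folded, 3))   # the two corners off the crease move out of plane
```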
Inflatable Structures An inflatable is a structure that can be inflated with gas, normally air, helium, hydrogen or nitrogen. Their popularity is largely due to their low weight and ability to pack into small volumes, conforming to almost any shape that can be deployed when required. Other advantages include their low production cost, high strength due to the large surface area over which they are able to absorb loads, and high reliability of deployment [28]. As a result, they have found their way into many applications including vehicle wheels, furniture, inflatable boats and buoyancy systems [81], airbags [23,233], membrane roofs [9,105,165], soft robotics [151,160,196,245,246], medical treatments [10,14,147,234] and for entertainment. Extensive research has also been conducted in the use of inflatables for space-based devices [69]. For example, small UAVs that can be tightly packed for ease of transportation or launching and that have significant robustness to withstand impact on landing, are of interest for military operations. This includes vehicles that are inflated on site and hand-launched, those which are gun-launched and inflate in flight, and ejectable (one-time use) or retractable (reusable) systems [25]. Schenk et al. [210] highlighted that efficient packing schemes are necessary to ensure reliable deployment of inflatables. The choice of material for an inflatable largely depends on the application. Everyday balloons tend to be made from latex rubber, polychloroprene or nylon, due to their low cost and ease of manufacture, and have often been adopted by HCI researchers in prototype development. Inflatable structures for space tend to consist of a combination of more costly, high strength-to-weight, durable fabrics such as Kevlar ® and Vectran ™ (a high-strength, liquid crystal polymer) with a polyimide (Kapton ® ) or polyurethane membrane material that acts as a sealant [220]. Car airbags are typically formed from thin woven nylon or polyester fabric [108], while polyethylene is often found in cushioning and packaging. Silicone has also been widely used in medicine [123] and in soft robotics for artificial muscles [115,140]. Ultimately, the strength and stiffness of the inflatable structure is controlled by the internal pressure and the elastic modulus of the restraint material [26]. Inflation pressure provides structural rigidity by placing tension in the walls of the structure. Therefore, to maintain the inflated shape, the internal pressure must equal or exceed any external pressure that is applied to the structure. The larger the structure, the lower the inflation pressure that is generally required. After a period of time, the inflation gas will inevitably escape through imperfections in the inflatable skin that may have occurred through manufacture, folding or deployment, reducing the overall shape and stiffness. Furthermore, inflated structures that lack reinforcement are more susceptible to puncture [210]. As a result, the concept of rigidisable materials, i.e. "materials that are initially flexible to facilitate inflation or deployment and become rigid when exposed to an external influence", has been introduced by Cadogan et al. [24], with an extensive review of the different methods given in [210]. Inflatable Structures within HCI An advantage of inflatable structures within HCI research is that they can be deflated, partially inflated or fully inflated, enabling a wide range of stiffness properties or actuation forces to be achieved. 
This has attracted computer scientists in recent years to adopt these structures for wearable technologies and to provide interactive haptic feedback to users. Examples include interactive shoes that adapt to different surfaces or to the user's foot morphology [11], therapeutic cushions that can adjust according to the user [263], and in inflatable pads fitted to car steering wheels that can pulsate and alert drivers to potential problems without utilising their vision or auditory senses that may already be fully engaged [56]. Inflatable materials have also found their way into interactive input and display devices including an inflatable mouse [114], inflatable buttons and controls [82,243], and in an inflatable multi-touch display surface that could dynamically deform from a flat, circular display to a convex or concave, hemispherical display according to the context of the user's task [225]. This highlights that inflatable structures for shape-change have already been well adopted within HCI as, for example, balloons are very easy to obtain at low cost and require little to no expertise or equipment. However, challenges often exist in ensuring reliable and predictable deployment and sufficient structural robustness after deployment, potentially limiting their use in more demanding environments [200,210]. By incorporating more robust materials or combining membrane materials with durable fabrics, in a similar manner to the aerospace or automotive industries, prototype development may be accelerated closer towards its end-use. Variable Stiffness Materials In this section, we discuss how shape-change can be achieved by designing a structure so that the stiffness properties vary in different directions, focusing on two mechanisms in particular: anisotropy and multi-stability. Anisotropic Structures Anisotropy, as opposed to isotropy, refers to the directional dependance of material properties. By designing an object such that the stiffness varies along different axes, it can be deformed in a direction with minimum actuation force [236]. The tailoring of stiffness is not a new idea. Bone, for example, has different elastic properties in two orthogonally opposed directions (known as orthotropic); parallel to and normal to the long axis of the structure [19]. Wood is an orthotropic structure as it has different properties in three perpendicular directions; axial, radial, and circumferential, due to its grainlike structure [77]. In contrast, metals and glass are isotropic and have the same macroscale properties in all directions. Examples of stiffness tailoring for shape-change can largely be found in the development of morphing technology for aircraft [121,237], such as bend-twist coupling of beams for gust alleviation [71], and in wind turbine blades [47,124]. In fibrereinforced composites, anisotropy is introduced through the distribution of material through-thickness and through fibre orientation [121]. By reducing stiffness in the chord direction and increasing in the span direction, researchers have aimed to create morphing aircraft skins that have sufficient stiffness to withstand aerodynamic loads, yet are flexible enough for actuation [238]. For example, Peel et al. [175] developed a manufacturing process for fibre-reinforced elastomeric composites using an elastomer matrix, such as urethane or silicone, with glass fibre reinforcement, and showed notable variation in the resulting elastic modulus in different directions [174]. 
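As a back-of-the-envelope illustration of the anisotropy achievable with fibre-reinforced elastomeric composites of the kind described above, the sketch below uses simple Voigt/Reuss rule-of-mixtures estimates for the stiffness along and across the fibres. The fibre and matrix moduli and the volume fraction are assumed, generic values, not data from the cited studies.

```python
# Sketch: why a unidirectional fibre-reinforced ply is anisotropic.
# Simple Voigt/Reuss rule-of-mixtures estimates; the fibre/matrix properties
# below are assumed, generic values, not data for any composite cited above.
Ef, Em = 70e9, 5e6        # glass-like fibre and elastomer matrix moduli, Pa
Vf = 0.4                  # fibre volume fraction (assumed)

E_longitudinal = Vf * Ef + (1 - Vf) * Em            # along the fibres (Voigt)
E_transverse = 1.0 / (Vf / Ef + (1 - Vf) / Em)      # across the fibres (Reuss)

print(f"E along fibres : {E_longitudinal / 1e9:.1f} GPa")
print(f"E across fibres: {E_transverse / 1e6:.1f} MPa")
print(f"anisotropy ratio ~ {E_longitudinal / E_transverse:.0f}")
```

With these illustrative numbers the ply is thousands of times stiffer along the fibres than across them, which is the property morphing-skin designers exploit: stiff enough in one direction to carry load, compliant enough in the other to be actuated.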
Although in material science, anisotropy is typically linked to the material's microstructure, it may also be introduced via its structure, such as through corrugation, which is perhaps more accessible to HCI research than high cost composites. Corrugated structures have long been found in the packaging industry (e.g. cardboard) [4,17,18], in civil infrastructures (e.g. in roofs, walls and pipes), [104,240], in aerospace structures (e.g. the Junkers Ju-52 of the 1930's) [106,232,265,266] and in sandwich panels for marine and aerospace applications [48,86,117,194]. This is due to their high strength-to-weight ratio, energy absorption capabilities and anisotropic behaviour, which can be attributed to the high degree of stiffness transverse to the corrugation direction, in comparison to along the corrugation direction. By specifying the dimensions and materials of both the face sheets and corrugated core, a range of structural characteristics for morphing can be achieved [266]. For example, Norman et al. [164] explored the use of curved corrugated shells for structural morphing and determined that, although there is a loss in membrane (i.e. in-plane) stiffness, the sheets are capable of large changes in Gaussian curvature ( Figure 5). Bi-directional cores have also been developed to modify the stiffness in both directions [130,214]. Anisotropic Structures within HCI The concept of material anisotropy for shape-change has been investigated to some extent by HCI researchers by using particle jamming to achieve variable stiffness properties, such as in [1,67], where the authors focused on alternating between soft and hard deformability. However, Ou et al. [171] showed that it was also possible to introduce anisotropic deformation using jamming through the structural design of the jammable materials, such as by weaving multiple jamming units into the material, using interleaving flaps in the elastic air bladder, or by introducing crease patterns or cutting geometrical patterns into the jamming flaps [169]. They envisioned that by incorporating more complex weaving patterns or increasing the resolution, it may be possible to program more sophisticated deformation interactions such as the direction of stretching, degree of rolling, bending angle and shear deformation. The developments in 3D printing have also made shape-change via anisotropy more accessible. Due to the layer-by-layer deposition, 3D printing is inherently anisotropic, resulting in objects that are much stronger in the horizontal printing direction than the vertical direction [186]. Typically, this is an unwanted characteristic, particularly for brittle materials, however, this has enabled the printing of hydrogel architectures with localised, anisotropic swelling behaviour that can result in complex, three-dimensional shape-change when immersed in water [73]. As in the case of auxetics, 3D printing has also encouraged cellular materials to be designed that can exhibit different deformation characteristics in different axis [41]. One of the main challenges in implementing anisotropy is that this type of shape-change, such as that exhibited in morphing aircraft wings and wind turbine blades, tends to be very subtle and may not be suitable for achieving large, nonlinear changes in shape that can be accomplished using deployable structures, elastomers and multi-stable structures [236]. Multi-stable Structures Multi-stable structures undergo large, rapid deformations between multiple stable mechanical shapes. 
The Venus flytrap is a well-known bi-stable structure that snaps from an open to closed state when small hairs on the plant are triggered by potential prey [68]. Another example is the slap bracelet that consists of layered, flexible, bistable spring bands sealed within a fabric, silicone or plastic cover. By straightening out the bracelet, tension is applied to the springs which is released when slapped against the wearer's arm, causing the bands to spring back and wrap around the wrist [120]. Self-retracting tape measures also undergo large rapid deformations between different states. During manufacture, a concavo-convex cross-section is introduced through heat treatment that gives the tape longitudinal structural rigidity when deployed and enables rapid retraction when bent [180]. Harnessing such characteristics may enable HCI researchers to achieve rapid actuation between multiple device shapes. Extensive research into multi-stability has been conducted within composite materials [50,125,144,145,183,184]. The snap-through phenomenon occurs when a structure is forced to transition from one equilibrium, which is stable under small perturbations, to another (usually by an external force), by transitioning through a region of instability. This region of instability, or negative stiffness, means that significant deformation is required to move between the two stable states and explains why bistability is so attractive for morphing applications [8,45]. The snapping of thin composite laminates occurs due to residual stresses that are generated during the cure cycle of an asymmetric lay-up (due to a mismatch in the coefficient of thermal expansion (CTE) of the constituent layers) or as a result of initial curvature. The tristability of composite shells has also been reported in [37,244]. Typically, only low actuation forces are required to generate large deformations in multi-stable composites. Various actuation mechanisms to provide this 'snap-through' shape-change for morphing applications have been investigated, including piezoelectric ceramic-based actuators [20], shape memory alloys [43,113] and thermal patches [135], and a comparison between them was made in [21]. A passively adaptive structure that does not rely on external mechanisms for actuation was also reported in [8]. In [163] the authors developed multistable corrugated panels from a Copper-Beryllium alloy and suggested that these structures could potentially be used as a mount for flexible displays or deployable electronic devices. Shan et al. [215] demonstrated the accessibility of fabricating bistable structures by 3D printing multi-stable, architected materials. When compressed, the internal beam elements move into another stable state but with higher energy, exhibiting local, bistable deformation. The beams can then return to their initial configuration when a sufficient reverse force is applied, enabling multiple changes in shape. In [189] the authors created a bistable metamaterial that moves through several metastable states via snap-through buckling when stretched, until it reaches full extension, achieving strains of up to 150%. Multi-stability has also been combined with deployable shape-change such as folding. The Buckliball [217] is a good example of this and is a continuum silicone shell structure developed by Shim et al. that undergoes a structural transformation when the internal pressure is reduced.
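The snap-through behaviour described above can be pictured as motion across a double-well energy landscape. The toy sketch below locates the two stable states of V(x) = a*x^4 - b*x^2 and the energy barrier separating them; the coefficients are arbitrary illustrative values, not fitted to any real laminate or metamaterial.

# Toy double-well energy landscape for a bistable element, V(x) = a*x**4 - b*x**2.
# The two minima are the stable shapes; the barrier between them is the energy an
# actuator must supply to trigger snap-through. Coefficients are illustrative only.

a, b = 1.0, 2.0

def V(x):
    return a * x**4 - b * x**2

x_stable = (b / (2.0 * a)) ** 0.5      # dV/dx = 0 at x = +/- sqrt(b / 2a)
barrier = V(0.0) - V(x_stable)         # height of the unstable hump above a well

print(f"Stable states at x = +/- {x_stable:.2f}")
print(f"Snap-through energy barrier = {barrier:.2f} (arbitrary units)")

The Buckliball introduced above can be read as one continuum realisation of this kind of multi-well behaviour, obtained through buckling rather than through residual cure stresses.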
A pattern of circular voids exists on the shell so that when the internal pressure falls below a critical value, the narrow pieces of material between the voids collapse inwards, causing buckling of the ball. In [193] the author discusses in more detail how buckling of slender structures can be exploited to develop functional mechanisms for smart morphable surfaces. As another example, Daynes et al. [44] developed a bistable, elastomeric origami morphing structure, made from silicone, with locally reinforced regions of acrylonitrile butadiene styrene (ABS). The structure could deploy from a flat to textured arrangement (Figure 6), under pneumatic actuation, and maintain its shape without sustained actuation. They highlighted that this cellular structure is a lightweight, inexpensive method of creating an actuating, shape-changing mechanism due to the low cost of materials and the use of 3D printing to fabricate the mould. However, the authors noted that the more compliant the structure is, the less able it is to support high external forces [46].
Multi-stable Structures within HCI
In HCI we are used to computers reacting at a fast rate, but in shape-changing interfaces this requirement is more difficult to fulfil as some shape-changing mechanisms, such as SMAs and SMPs, have slow actuation times that may affect the user experience [231,255]. In contrast, bistable materials typically have actuation times of a few milliseconds [21]. Although multistability is normally associated with composite materials, we have shown that this type of shape-change can be exploited using simple and inexpensive materials and methods. The main challenge of implementing multi-stability within HCI is that these structures are typically binary, i.e. they are limited to two stable shape configurations (bistable), and the shape-change is challenging to control [42]. As multi-stability is a non-linear phenomenon, it is often not intuitive how additional stable configurations can be achieved by tailoring the underlying mechanics. Furthermore, the greater the number of stable configurations, the harder it is to control the actuation dynamics due to possible nonlinear interactions between modes [64]. Consideration must also be given to how these structures will be actuated. Nonetheless, multi-stability may provide a mechanism to achieve rapid actuation and shape-change that has not previously been reported within HCI literature.
Shape Memory Materials
Shape memory materials are a class of materials that exhibit a shape memory effect (SME) due to their ability to change stiffness as a result of an externally applied stimulus. Here we discuss the two most common forms of shape memory materials: shape memory alloys (SMAs) and shape memory polymers (SMPs).
Shape Memory Alloys
The shape-changing mechanism behind SMAs is based on a reversible martensitic transformation. When the SMA is cooled, it undergoes a martensitic transformation from its austenite phase to its twinned martensite phase, at which point the material is malleable and may be reconfigured into the desired shape (a deformed temporary martensite phase). When heated above the austenite finish temperature, the material undergoes the reverse martensitic transformation, returning to its original, rigid shape in the austenite phase. This is known as a one-way SME. It is also worth noting that SMAs exhibit a temperature hysteresis (Figure 7).
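A minimal sketch of this hysteresis is given below, assuming illustrative NiTi-like transformation temperatures and the cosine-shaped phase-fraction curves used in simplified SMA constitutive models; the specific temperatures are assumptions, not properties of any particular alloy.

import math

# Simplified martensite-fraction model for a one-way SMA wire, showing the
# temperature hysteresis: the heating (martensite -> austenite) and cooling
# (austenite -> martensite) transformations occur over different temperature
# windows. Transformation temperatures are assumed, NiTi-like values.

A_s, A_f = 68.0, 78.0   # austenite start/finish on heating, deg C (assumed)
M_s, M_f = 52.0, 42.0   # martensite start/finish on cooling, deg C (assumed)

def martensite_fraction(T, heating):
    if heating:                          # martensite -> austenite
        if T <= A_s:
            return 1.0
        if T >= A_f:
            return 0.0
        return 0.5 * (math.cos(math.pi * (T - A_s) / (A_f - A_s)) + 1.0)
    else:                                # austenite -> martensite
        if T >= M_s:
            return 0.0
        if T <= M_f:
            return 1.0
        return 0.5 * (math.cos(math.pi * (T - M_f) / (M_s - M_f)) + 1.0)

for T in (45.0, 60.0, 75.0):
    print(T, martensite_fraction(T, heating=True), martensite_fraction(T, heating=False))

At 60 degC in this toy model, for example, the material is still fully martensitic when heating but fully austenitic when cooling, which is the hysteresis the text refers to.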
In other words, the temperature required for the martensite to austenite transformation is higher than for the austenite to martensite transformation [236]. Materials with a two-way SME are able to remember their shape at both low and high temperatures and switch between them. To achieve this, the material must be subjected to repeated one-way SME cycles [134]. These training requirements, in addition to the limits in maximum recoverable strain and the tendency of the shape recovery to deteriorate at high temperature over time, means that one-way SMEs are generally favourable due to their greater reliability [227]. A third characteristic of SMAs is pseudoelasticity. This means that the SMA can transform between the austenite and martensite phase without a change in temperature when a mechanical force is applied [102]. To date, SMAs have been developed into many shapes including solid (such as wires), film and even foam. Commercially available SMAs are typically based on one of three alloys: NiTi-based (Nitinol), Cu-based and Fe-based, the choice depending on the application. For example, NiTi SMAs exhibit very high performance, are highly reliable and have frequently been implemented in the development of HCI prototypes. Febased alloys have excellent biocompatibility and are often found in biomedical applications. Cu-based alloys are low cost but relatively weak and are generally only used for one-time actuation [227]. All these materials are thermo-responsive, therefore relying on heat to trigger the SME, however, recent advances have also been made in developing magnetoresponsive SMAs based on ferromagnetic materials [94]. Shape Memory Alloys within HCI The most useful applications of SMAs within HCI are as an actuating mechanism for shape-change. SMAs have several advantages over motors, pneumatics and hydraulic systems such as low cost, reduced size and complexity, their ability to react directly to environmental stimuli such as temperature, their biocompatibility and low weight [157]. Furthermore, they are capable of actuating in 3D, thereby enabling the evolution of structures and devices that can extend, bend and twist [102]. They have also been widely adopted within HCI prototypes. For example, SMAs have been used to animate paper [185,187], to create novel, deformable user interfaces and displays [75,155,172,198] and in the development of shape-changing and texturally-rich surfaces [38,39,168]. Although SMAs have been widely used within HCI research, they are not suitable for every application. They have a relatively small usable strain and are not easy to control. They have a low actuation frequency, low accuracy, are not very energy efficient and have a demanding training regime [102]. In addition, rapid heating of SMAs is challenging and although this can be achieved by Joule heating (i.e. applying an electrical current), care has to be taken not to overheat and damage the elements or harm the user [61,188]. Furthermore, the cooling process is also slow, highlighting the relatively slow actuation response time of SMAs, which is also influenced by the size and shape of the SMA (i.e. larger and thicker SMAs take longer to heat and cool) [5,231]. This is likely to have a significant impact on the user experience and it may be necessary to look to alternative mechanisms, such as multi-stability, to achieve rapid actuation between different shape states. Shape Memory Polymers SMPs also exhibit an SME and have several advantages over SMAs. 
They are lighter, lower in cost (both in the raw material and processing), easier to process into almost any shape and their material properties can be more readily manipulated [256,257]. Furthermore, SMPs have a greater recoverable strain (up to 1100%) than SMAs and the shape memory process can be triggered by a wide range of stimuli including UV [131], moisture [258], heat [138] or several stimuli combined [95]. The SME observed in SMPs is largely due to their molecular structure and the processing and programming conditions. Examples include segmented polyurethane, styrene-based polymers and crosslinked polyethylene [92]. These polymers undergo a transformation to a more deformable state when heated above their transition temperature, which may be its T g (see elastomers) or its melting temperature. On cooling, the polymer hardens into the deformed shape and maintains this shape even when the force is removed. If the polymer is heated back above its transition temperature, it will undergo a shape memory recovery process and return to its programmed shape [236]. This is unlike SMAs where the stiffness is reduced when the temperature is lowered. Due to the wide shape recovery temperature range of SMPs, it is possible to have more than one memorised shape, achieved either by triggering different stimuli or through multiple transitions within different temperature ranges [13,15,254]. In depth reviews into the developments of SMPs can be found in [91,92,138]. Shape Memory Polymers within HCI Both within the HCI and materials communities, SMAs have been more widely utilised than SMPs for actuating shapechange. The lack of understanding in the behaviour of SMPs, particularly regarding their long-term use, means that few are commercially available and as a result, they have not been widely implemented [227]. Like SMAs, the actuation response of these materials is relatively slow [255]. However, the field has seen recent growth and researchers have begun investigating their potential use in a wide variety of applications including self-tightening sutures [132], biodegradable stents [250], surface patterning (e.g. braille) [93], morphing wings [107,178], deployable structures [222] and self-healing materials [253], due to their highly tailorable structure. BRIDGING THE GAP As seen in Table 1, some shape-changing technologies have been more widely adopted within HCI than others. For example, foldable and inflatable structures and SMAs have been used in a variety of applications, ranging from interactive displays and input/output devices to clothes and furniture. However, little research has been conducted using rollable and multi-stable structures to enact shape-change within HCI. We argue that this can be attributed to two key factors: (1) a lack of awareness and understanding of shape-changing technologies and their material characteristics by HCI researchers, and (2) a lack of availability of equipment, materials and lab space for HCI researchers to support their work. SMAs, for example, are readily available online and require limited expertise. In contrast, the complex mathematics behind auxetic materials can make their fabrication challenging. Multi-stability and anisotropy have not been widely adopted within HCI, potentially due to an absence in appreciation of these mechanisms and the methods required to implement them. These are also relatively new fields in material science and work is still required by the materials community to mature the technology. 
Although material science, like HCI, is multi-disciplinary and benefits from collaboration between physicists, chemists and engineers, the approaches to research also have some key differences. For example, manufacturability is a key consideration in the design and development phases in material science and projects often go through many development cycles (years, or even decades) before they are physically realised. In contrast, HCI researchers often develop proof of concept prototypes in much shorter time frames, with attention being paid to the expressivity and ease of appropriation of the technology. This difference in requirements and approach to research is perhaps another key factor in the gap between these two fields. The choice of shape-changing mechanism is also largely dependant on factors such as performance (e.g. strength and robustness), power consumption, actuation capabilities (e.g. maximum displacement and speed) and shape-change resolution (e.g. granularity, curvature, strength and speed), and this review highlights that the different technologies vary in each of these criteria. Stretchable and deployable structures tend to be limited to applications that do not require significant strength or robustness, however, they are able to achieve a higher number of shape configurations and larger changes in area and/or volume. Variable stiffness materials are capable of more robust shape-change that can be used for more demanding applications, however, the number of shape configurations and degree of shape-change tends to be limited. Actuation speed and power consumption also differ between each of the technologies. For example, SMAs have a slow response time, are only capable of providing a low actuation force and require a high power consumption, however, many shape configurations can be achieved. In contrast, multi-stable structures deform at a rapid rate and require little force to do so, however, they are limited in the number of stable mechanical shapes they can morph into. Each of these criteria are essential to both material science and HCI applications, highlighting how these fields can benefit from working together. To improve the synergy between these two fields and to foster future collaboration, we propose the following way forward: • The creation of a language and/or common syntax between the fields to address the gap in understanding terminologies and methods. This paper provides a starting point for addressing this issue by defining material properties that HCI researchers may be unfamiliar with. • The creation of online platforms where researchers within HCI can express what is needed in terms of hardware manufacturing, as well as share current developments. As a first step in this direction we placed the content of this paper at www.morphui.com and will update it regularly. • The creation of software design tools to increase the accessibility of material outputs that do not require an in depth understanding of the science behind such developments. Ideally this would enable HCI researchers to have a platform which provides them with an awareness of the behaviour and characteristics of these materials/mechanisms. • The creation of hardware design tools to enable the use of printers, Fab Labs etc., for reproducing material science outputs in a more accessible manner. 
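Returning to the mechanism trade-offs summarised before the list of proposals above, a small selection helper can make those qualitative differences explicit. The ratings below are coarse illustrations distilled from this review rather than measured figures of merit, and the function and category names are ours.

# Toy decision helper that encodes the qualitative trade-offs discussed above as
# coarse ratings and filters candidate mechanisms against a requirement. The
# ratings are illustrative simplifications, not quantitative data.

mechanisms = {
    # name: (actuation speed, holding force, number of shape states)
    "inflatable":       ("medium", "low",    "many"),
    "SMA":              ("slow",   "medium", "many"),
    "SMP":              ("slow",   "medium", "few"),
    "multi-stable":     ("fast",   "high",   "few"),
    "anisotropic skin": ("medium", "high",   "few"),
}

def candidates(min_speed=None, min_states=None):
    speed_rank = {"slow": 0, "medium": 1, "fast": 2}
    states_rank = {"few": 0, "many": 1}
    out = []
    for name, (speed, force, states) in mechanisms.items():
        if min_speed and speed_rank[speed] < speed_rank[min_speed]:
            continue
        if min_states and states_rank[states] < states_rank[min_states]:
            continue
        out.append(name)
    return out

# e.g. a haptic widget that must snap between shapes quickly:
print(candidates(min_speed="fast"))      # -> ['multi-stable']
# e.g. a slowly reconfiguring surface with many target shapes:
print(candidates(min_states="many"))     # -> ['inflatable', 'SMA']

Such a table is deliberately crude, but it captures the point that no single mechanism dominates on all criteria, which is why the choice must be driven by the application.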
The capabilities of layering and arranging material, brought on by the additive manufacturing (3D printing) revolution, have sparked interest in compliant, shape-adaptive and multifunctional systems in both HCI and material science. The first-hand experience of writing this review suggests that the HCI community may benefit from a broader awareness of state-of-the-art material systems, whereas the material science community has little experience in embedding information processing, i.e. "intelligence", into structures. Perhaps the greatest means of fostering synergies between these two fields is taking the leap and interacting with colleagues across discipline boundaries just for curiosity's sake.
CONCLUSION
With the aim of accelerating the design of shape-changing devices, we have provided a review of the advances in material science from an HCI perspective. We see this approach as a road map for next-generation designers who want to better understand material science and adopt shape-changing mechanisms in their work. We also believe that creating OUIs requires a redefinition of the tools we use during the design process. The tools needed for shape-changing interface design need to be more expressive, like the raw and versatile materials an industrial designer might use to create complex geometries. A change like this can happen if HCI practitioners are attentive to shape-change developments from a material science perspective. This work is a step in this direction as it bridges a gap between material science, HCI and shape-change.
2018-04-24T17:33:33.767Z
2018-04-21T00:00:00.000
{ "year": 2018, "sha1": "012f534785bac4c0bd9436fbc2db4e249422a828", "oa_license": "CCBY", "oa_url": "https://research-information.bris.ac.uk/ws/files/146384280/PURE_upload_CHI.pdf", "oa_status": "GREEN", "pdf_src": "ACM", "pdf_hash": "012f534785bac4c0bd9436fbc2db4e249422a828", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
5222828
pes2o/s2orc
v3-fos-license
In vitro studies of the toxic effects of silver nanoparticles on HeLa and U937 cells In the last decade, much attention has been paid to studies of the effect of silver nanoparticles (Ag NPs) on tumor cells. Apart from elucidation of the mechanism of NPs’ interaction with mammalian cells, these studies are aimed at discovering new effective antitumor drugs. In this work, we report about the toxic effects of Ag NPs observed on two types of tumor cells: HeLa (adhesive cells) and U937 (suspension cells). The Ag NPs were obtained by an original method of biochemical synthesis. Particle size was 13.2±4.72 nm, and zeta potential was −61.9±3.2 mV. The toxicity of Ag NPs in the concentration range 0.5–8.0 μg Ag/mL was determined by means of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay and cytofluorometry after 4 and 24 hours’ incubation. It was found that Ag NPs had high toxicity toward both cell types. The minimal concentrations where a toxicity effect was registered (toxicity thresholds) lied in the range 0.5–2.0 μg Ag/mL. In parallel with the Ag NP solution, cells were incubated with water solutions of the NP stabilizer (aerosol-OT) and Ag+ ions (as silver nitrate). It was shown that aerosol-OT had no effect on the viability on HeLa cells, but was moderately toxic toward U937, though less dangerous for these cells than Ag NPs. With Ag+ ions, for HeLa no toxic effect was observed, while for U937 they were as toxic as the Ag NPs. The data obtained indicate that Ag NPs as used in this study may prove to be useful for the creation of medicines for cancer therapy. Introduction In the last decade, much attention has been paid to studies of biological (toxic) effects of silver nanoparticles (Ag NPs). The main reason is that these NPs demonstrate strong bactericidal activity, both in the form of solutions and as components of nanocomposite materials. [1][2][3][4][5] A wide spectrum of pathogen microbes affected, in combination with the relatively simple and cheap technology of production, are responsible for the fact that Ag NPs have found a lot of applications in the production of consumer products and materials for medical purposes. [6][7][8] Such an intensive usage of NPs raised the question of their toxicity for humans and the environment. [9][10][11] Therefore, the problem arose of determination of conditions for safe application of Ag NPs, so as to make their valuable advantages much more significant than their negative effects. Works in this direction belong to the field of nanotoxicology, the newly formed branch of toxicological studies, dealing with the toxic effects of NPs conditioned by their specific physicochemical, optical, and mechanical properties. 10,[12][13][14] The main objects used in in vitro studies of Ag NP action are microbial species and cultured normal or tumor mammalian cells; the results obtained are summarized in recent books and reviews. 2,[15][16][17][18][19] With normal cells, two main aims are pursued by researchers. First, to evaluate the extent of the NPs' cytotoxicity, in particular to elucidate whether the NPs' bactericidal concentration found in experiments on bacteria is toxic for animal or human cells, and hence whether the given application of NPs is dangerous for humans. 
Second, it is important to define the mechanism of NPs' action, including the influence of their main parameters (size, form, surface charge, and stabilizing shell) on cell viability and functions, as well as to provide visualization of the ways of penetration of NPs into the cell interior. Attention has also been paid to the role of silver ions in the toxicity of NPs. 20,21 With tumor cells, apart from elucidation of the mechanism of cytotoxicity, the purpose is to obtain data allowing the application of Ag NPs in cancer therapy. 22,23 Analysis of the data available reveals problems arising mainly from the underestimation of the significance of the way applied for NPs preparation, which determines their properties and behavior in experimental conditions. The importance of the method used for the synthesis of metal NPs was emphasized in our papers dealing with the studies on biological effects of Ag NPs, 1,24 and more recently in a monograph devoted to the same effects of metal NPs. 2 For example, there are few reports of studies of both antimicrobial activity and cytotoxicity on the same NP preparation. 22,25 In the other publications available, comparison of the working bactericidal concentrations of Ag NPs with those provoking cell-toxic effects was made with different NP preparations. 21,26,27 As shown in our book, 2 such a comparison cannot be made correctly, since differently prepared NPs have different parameters affecting their biological activity. In our researches, we used Ag NPs obtained by an original method of biochemical synthesis in studies of their action on various biological objects, including bacteria, viruses, 1,28,29 uncellular slim mold, 30,31 unicellular alga, and plant seeds. 32 In the course of these studies, we worked out the requirements for NP preparation and experimental procedure necessary for revealing the pure effects of NPs on the object under consideration. In this paper, we report our first results obtained in studies of Ag NPs' effect on the viability of human tumor cells. Our aim was to determine the influence of NP concentration and time of incubation on the percentage of living cells, and to reveal the corresponding changes in their state (apoptosis and necrosis). cell lines A HeLa adherent cell line was obtained from the American Type Culture Collection (ATCC; Manassas, VA, USA). Cells were maintained in DMEM supplemented with 10% (v/v) BSA in a 95% (v/v) humidified atmosphere and 5% (v/v) CO 2 at 37°C. Cells were seeded near confluence a day before incubation with NPs. A U937 suspended cell line (ATCC CRL 1593, DSM ACC 5, and ECACC 85011440 collections) was obtained from the Russian Cell Culture Collection (Institute of Cytology of Russian Academy of Sciences, St Petersburg, Russia). Cells were maintained in RPMI 1640 medium supplemented with 10% (v/v) BSA in a 95% (v/v) humidified atmosphere and 5% (v/v) CO 2 at 37°C. Optimal cell-seeding density was 2×10 5 -9×10 5 cells/mL. Cells were seeded a day before incubation with NPs. Silver nanoparticle synthesis and characterization Ag NPs were obtained first in micellar solution (in organic solvent) by means of an original method (biochemical synthesis) based on the reduction of silver ions in reverse micelles by the natural flavonoid quercetin (Qr). The reverse micelles are formed from AOT, a synthetic anionic surface-active substance widely used for the creation of reverse micellar solutions, including those used for the synthesis of metal NPs. 
33,34 The proper choice of system composition (concentrations of reagents, hydration extent, metal salt, solvent) allows the provision of a sufficiently high rate of synthesis, stability, and yield of NPs. Principles of the method and details of Ag NP preparation are given elsewhere. 1,24,35 Briefly, the Qr micellar solution in AOT/isooctane is first prepared according to a previously described procedure. 35,36 The Qr concentration (C_Qr) in micellar solution is determined from spectrophotometric measurements by using the extinction coefficient (ε = 1.8×10^4 L/(mol·cm)), as previously described. 36 Then, metal salt is added as a water solution to the concentration and hydration extent desired; in this work, a water Ag(NH3)2NO3 solution (prepared by the addition of ammonium hydroxide to silver nitrate solution) was introduced into the Qr micellar solution to a silver salt concentration (C_Ag) of 1 mM and a hydration extent of w = [H2O]/[AOT] = 3.7. After being shaken for several minutes, the almost colorless Qr solution acquired the intense red-brown coloration that indicated the appearance of NPs. Measurements of optical absorption spectra showed that the NP formation took no more than 2 days, as seen from the absence of further changes of the characteristic absorption band (435 nm). The Ag NP concentration (C^0_NP) in micellar solution was obtained from the measured optical density in the absorption-band maximum at this stationary stage and the extinction coefficient (ε = 1.03×10^4 L/(mol·cm)), as previously described. 36 The yield of NPs (β_NP = [C^0_NP/C_Ag] × 100%) as a function of C_Qr/C_Ag allows the determination of the region of this relation where the yield is constant and equal to 100%, corresponding to the complete reduction of metal ions. 2,36 For the parameters used in this work, the Ag NP concentration was determined 3 days after the beginning of synthesis: C^0_NP ≈ 1 mM, ie, β_NP ≈ 100%. The water solution of the Ag NPs is obtained by their transfer from the micellar solution into the water phase by means of a specially developed procedure. 1,37 In water solution, the NPs are stabilized by the AOT bilayer shell bearing negative surface charge. The Ag NP concentration in water solution is determined from the optical density of the corresponding absorption band using the extinction coefficient (ε = 1.14×10^4 L/(mol·cm)) obtained in independent measurements; details may be found in our previous paper and in a recent book. 2,37 The concentration of stabilizer (AOT) is found by means of the standard procedure used for the determination of anionic surface-active substances in water. 38 In the water solution used in this work (referred to as "initial solution"), the Ag NP concentration (equivalent to metallic silver) was 1 mM (108 µg/mL) and the AOT concentration was 2 mM. Size distribution and zeta potential of the NPs in water solution were found by photon correlation spectroscopy (PCS) on a ZetaPALS instrument (Brookhaven Instruments, Holtsville, NY, USA). Morphology, structure, and size distribution were determined by transmission electron microscopy (TEM) on a Leo 912 AB Omega microscope (Carl Zeiss, Oberkochen, Germany). Samples for microscopy were prepared by placing 3 µL of the Ag NP water solution on a formvar-coated copper grid, with subsequent drying for 30 minutes in air. From the data given by electron microscopy, a histogram was created for 519 particles.
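For readers less familiar with the spectrophotometric bookkeeping above, the short sketch below walks through the Beer-Lambert arithmetic (C = A/εl) behind the NP concentration and yield. The optical density and dilution factor are invented example inputs, while the extinction coefficient, the 1 mM silver salt concentration and the 1 cm path length assumption follow the description above.

# Worked example of the concentration and yield determination, using the
# Beer-Lambert relation C = A / (epsilon * l). The optical density and the
# dilution factor are invented inputs for illustration.

def concentration_mM(optical_density, epsilon_L_per_mol_cm, path_cm=1.0):
    return optical_density / (epsilon_L_per_mol_cm * path_cm) * 1000.0   # mol/L -> mM

C_Ag = 1.0                       # mM, silver salt in the micellar solution
dilution = 20.0                  # hypothetical dilution before measurement
A_435 = 0.515                    # hypothetical optical density at 435 nm
C_NP = concentration_mM(A_435, 1.03e4) * dilution
yield_percent = C_NP / C_Ag * 100.0

print(f"Ag NP concentration: {C_NP:.2f} mM, yield: {yield_percent:.0f}%")

# 1 mM of silver corresponds to 1e-3 mol/L * 107.87 g/mol, ie, about 108 ug/mL,
# matching the mass concentration quoted for the initial water solution.
print(f"1 mM Ag ~ {1e-3 * 107.87 * 1e3:.0f} ug/mL")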
Cell incubation with silver nanoparticles
Cells were grown in appropriate plates (96- or 12-well) depending on assay type and under the conditions described in the "Cell lines" section. Then, aliquots of the initial Ag NP solution were added to the cell medium to final NP concentrations of 0.5, 1.0, 2.0, 4.0, and 8.0 µg/mL. After that, the cell medium was gently mixed by pipetting in each well. Cells were incubated for 4 and 24 hours in a 95% (v/v) humidified atmosphere and 5% (v/v) CO2 at 37°C. The effect of NPs was also compared with that of silver ions or stabilizer (AOT); for this purpose, cells were incubated with either silver nitrate or AOT water solution added to the final concentrations of metallic silver or AOT introduced at the corresponding dilutions of the initial Ag NP solution (see also the "Estimation of cell viability by MTT assay" section). Incubation with deionized water was used as a negative control.
MTT viability assay
Cells were grown in 96-well plates in 200 µL of medium volume per well and incubated with NPs, AOT, or Ag+ ions, as mentioned in the previous section. After treatment, the medium was removed and cells were washed twice with PBS (pH 7.4) at 37°C and placed into PBS. Then, the MTT solution was added to the cells to a concentration of 1 mg/mL, and cells were incubated for 3 hours in standard conditions. After incubation, PBS was removed and DMSO was added to dissolve the formed formazan crystals. The optical density of the formazan solution in DMSO was measured by a Chameleon V microplate reader (Hidex, Turku, Finland) at 540 nm. Cell viability for each concentration point was calculated as the ratio of the mean optical density of replicated wells relative to that of the negative control.
Measurement of apoptotic levels by flow cytofluorometry
To detect apoptotic events, annexin V conjugated with fluorescein isothiocyanate (FITC) and propidium iodide (PI) were used. The procedure was performed using the Annexin V-FITC Apoptosis Detection Kit I and according to the manufacturer's instructions. Cells were grown in 12-well plates in 1 mL of medium volume per well and then incubated with NPs, AOT, or Ag+ ions. After being treated with NPs, cells were washed in cold PBS, then resuspended in annexin V binding buffer; after that, annexin V and PI were added. After being incubated for 15 minutes, stained cells were diluted with binding buffer, placed into an ice bath, and then analyzed using a FACSCalibur flow cytometer (BD Biosciences). Fluorescence levels of FITC and PI were measured.
Statistical analysis
All experiments were performed in triplicate, and all data are presented as means ± standard deviation. Statistical significance was determined by Student's t-test.
Silver nanoparticles
Typical electron micrograph and diffraction pattern of Ag NPs in the initial water solution are shown in Figure 1. The NPs were approximately spherical (Figure 1A); electron diffraction revealed a face-centered cubic crystal structure with parameters corresponding to those of Ag crystal (Figure 1B). Particle-size distribution is given in Figure 2A. Gauss approximation (for 90% of the particles) gave a diameter of 13.4±4.7 nm. Mean size measurement by PCS resulted in 32.0±0.6 nm (Figure 2B), ie, the mean diameter was larger than that found from TEM.
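Two small calculations implicit in the procedures above are sketched below: the aliquot of the 108 µg/mL Ag NP stock needed to reach each final concentration in a 200 µL well, and the viability ratio computed from MTT optical densities. The optical density values are invented for illustration, and the small volume added by each aliquot is ignored.

# (1) Aliquot of the 108 ug/mL Ag NP stock needed for each final concentration
#     in a 200 uL well, from C1*V1 = C2*V2 (ignoring the small added volume).
# (2) MTT viability as the ratio of mean optical densities (invented values).

stock_ug_per_mL = 108.0
well_volume_uL = 200.0

for target in (0.5, 1.0, 2.0, 4.0, 8.0):          # final concentration, ug/mL
    aliquot_uL = target * well_volume_uL / stock_ug_per_mL
    print(f"{target:4.1f} ug/mL -> add {aliquot_uL:5.2f} uL of stock")

def viability_percent(od_treated, od_control):
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * mean(od_treated) / mean(od_control)

print(f"viability ~ {viability_percent([0.42, 0.45, 0.40], [0.80, 0.78, 0.82]):.0f}%")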
The enhancement of particle sizes measured by PCS for metal (at least silver and gold) NPs compared to those given by TEM has been reported in the literature 39 as well as in our previous publications. 1,40 The zeta potential of Ag NPs was found to be -61.9±3.2 mV; therefore, the particles were negatively charged, as could be expected for their stabilizing shell from AOT molecules with anionic ionizable groups.
Estimation of cell viability by MTT assay
The cytotoxicity of the NPs was characterized by determination of 1) cell viability and 2) cell death levels (early apoptosis, late apoptosis, and necrosis). Cell viability was measured by the well-known MTT assay, based on evaluation of the activity of mitochondrial dehydrogenases. When incubated with the cells studied, MTT as substrate is reduced by these enzymes to formazan dye, and then its concentration in DMSO solution is found from the optical density of the relevant absorption band. 41,42 MTT assays were performed after 4 and 24 hours' incubation with various NP concentrations (see the "Cell incubation with silver nanoparticles" section) introduced as the corresponding dilutions of the initial solution of Ag NPs. To check the influence of the stabilizer (AOT) and to throw light also on the role of silver ions in the toxic effect of Ag NPs, in parallel with the NP solution the cells were incubated with water solutions of either AOT or silver nitrate. Here, the AOT concentrations were equal to those introduced with the NP solution (as the corresponding dilutions of 2 mM AOT solution [see the "Silver nanoparticle synthesis and characterization" section]), and silver ion concentrations were equal to the metallic silver concentrations introduced with Ag NPs (as if all the NPs would dissociate into ions). The percentages of active cells (marked as cell viability) as a function of silver (Ag NPs or Ag+ ions) concentration, together with the corresponding results for AOT, are presented in Figure 3. As suggested by us earlier in analysis of the antimicrobial effects of Ag NPs, 2 the NP concentration corresponding to the beginning of viability decrease could be regarded as the toxicity threshold (TT), which can be used as a characteristic of the toxicity of metal NPs. This criterion is applied also in this work for estimation of Ag NP toxic action toward cultured cells. For HeLa cells (Figure 3A and B) viability decreased from 2.0 µg/mL Ag NPs, so the TT values were equal to 2.0 µg/mL for both incubation times. Also, it was clear that both AOT and Ag+ ions did not exert a noticeable influence on cell viability in the whole concentration range studied. Interestingly, here an increase in viability was observed at the lowest AOT and (at 24 hours' incubation) Ag NP concentrations, indicative probably of a stimulating action of these agents in the nontoxic range below the TT value. For U937 cells (Figure 3C and D), a decrease in cell viability was also observed at both incubation times. A significant toxic effect was detected for Ag NPs and Ag+ ions, while the stabilizer (AOT) was less dangerous for cells in almost the whole concentration range. After 4 hours' incubation, an obvious decrease in cell viability compared to control was registered for Ag NPs and Ag+ ions at 2.0 µg/mL and 1 µg/mL, respectively. That is, the TT value for the NPs was twice as large as that for silver ions; ie, silver ions were found to be more toxic than the NPs.
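The way a toxicity threshold can be read off such viability-versus-concentration data is sketched below. The viability numbers and the 20% margin are hypothetical illustrations; in the study itself the TT was identified from Figure 3 together with the statistical analysis.

# Sketch of reading a toxicity threshold (TT) off a viability series: the lowest
# tested concentration at which viability falls below control by more than some
# margin. Data points and the margin are hypothetical.

viability = {      # concentration (ug/mL) -> viability (% of control)
    0.5: 104.0,
    1.0: 98.0,
    2.0: 71.0,
    4.0: 45.0,
    8.0: 52.0,
}

def toxicity_threshold(data, margin_percent=20.0):
    for conc in sorted(data):
        if data[conc] < 100.0 - margin_percent:
            return conc
    return None

print(f"TT ~ {toxicity_threshold(viability)} ug/mL")    # -> 2.0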
From 2.0 µg/mL up to 8.0 µg/mL, both the Ag NPs and Ag + ions demonstrated an almost equal degree of toxicity. After 24 hours' incubation, for both agents a toxic effect was revealed from the smallest concentration studied (0.5 µg/mL), ie, the TT value was less than that found after 4 hours' incubation, and equal for NPs and silver ions. The toxic effect of AOT here was more pronounced, though obviously less than that of silver for all the concentrations, except for 8.0 µg/mL. It is worth noting that at both incubation times, cell viability at 8.0 µg/mL Ag NPs was somewhat higher than that at 4.0 µg/mL. To our view, this may be connected with the NP aggregation and sedimentation in the cell medium leading to the decrease in concentration of biologically active particles. As follows from the comparison of the data obtained for the two cell lines, HeLa cells appeared to be more resistant to the toxic action of Ag + ions than U937 cells. The reason lies probably in the different nature of the cells: for the adhesive cells (HeLa), part of their surface (adherent to the well surface) is less available for silver ions, while for suspension 24 Kaba and egorova cells (U937), their whole surface is readily available. Also, the difference in cell size may play a role: since U937 are smaller, for an equal cell number in unit volume of suspension, the total surface area (ie, surface of contact) of U937 is larger than that of HeLa, and hence the effect of Ag + ions may be more pronounced. Detection of early and late apoptotic and necrotic events In this study, differentiation between early and late apoptosis was performed. FITC-labeled annexin was used to determine cells at early apoptosis because of its capability to bind with phosphatidyl serine on the surface of apoptotic cells (in the presence of Ca 2+ ions). Additional cell staining with PI allows for distinguishing between late apoptotic and necrotic cells during analysis. 43 Therefore, it was possible to distinguish cells at various apoptosis stages from those in a necrotic state. Results were obtained as events registered in the FL1 (FITC) and FL2 (PI) channels of the cytofluorometer. Percentages of apoptotic and necrotic events were calculated after making FL1/FL2 dot-plot diagrams (data not shown). The data obtained from these diagrams were used to represent the relation between various events for each Ag NP concentration (Figure 4). Figure 4A) the percentage of living cells was lower compared to control at all Ag NP concentrations studied; their contribution decreased in the range 0-2.0 µg/mL (except for 0.5 µg/mL) and then increased for 4.0 and 8.0 µg/mL. At 2.0 µg/mL, a noticeable increase in the percent of necrotic cells was observed, which correlated with the significant decrease in cell viability at this NP concentration ( Figure 3A). At the two higher NP concentrations, the contribution of these dead cells was lower, while the percentage of apoptotic cells was higher (at 4 µg/mL) or almost the same as that at 2 µg/mL. After 24 hours' incubation ( Figure 4B) at 0.5-1.0 µg/mL of Ag NPs, the percentage of living cells remained practically equal to that found in the control. At 2.0 µg/mL, an obvious decrease of living cell contribution and increase of that for cells in apoptotic and necrotic states was registered. Both results were in accordance with the corresponding data on viability given by the MTT test ( Figure 3B). 
Again, at 4.0 and 8.0 µg/mL, the contribution of dead cells was somewhat lower and that of early apoptotic cells higher than at 2.0 µg/mL. For HeLa cells, after 4 hours' incubation ( For U937 cells, after 4 hours' incubation ( Figure 4C) the percentage of living cells tended to decrease monotonously with increase in Ag NP concentration. At the same time, the 25 Ag NP effects on hela and U937 cells percentage of apoptotic cells increased. From 1.0 µg/mL to 4.0 µg/mL, cells at late apoptosis dominated over those at early apoptosis. Contrary to that, at 8.0 µg/mL cells were generally early apoptotic. The contribution of necrotic cells was low at all NP concentrations. After 24 hours' incubation ( Figure 4D), the level of necrotic cells incubated with 0.5-1.0 µg/mL of Ag NPs increased in comparison with cells after 4 hours' incubation. At 0.5 µg/mL, a low percentage of living cells and a high percentage of necrotic cells was observed; this result was unexpected, taking into account the relatively high viability given for this NP concentration by the MTT assay ( Figure 3D). At 4.0 and 8.0 µg/mL NP concentrations, the vast majority of cells were in a late-apoptosis state: no living, but almost no necrotic cells were found. This may have been associated with the aggregation of NPs and the corresponding decrease of the concentration of biologically active NPs in solution, in accordance with the increase of cell viability at 8.0 µg/mL of Ag NPs. Also, this may have resulted from cell proliferation during 24 hours' incubation with 8.0 µg/mL of Ag NPs. To check our supposition of the aggregation of NPs as a possible reason for the enhanced viability of cells at higher concentrations of Ag NPs, we tried first to measure particle sizes by PCS in cell-culture media upon incubation with NPs. However, it turned out that this task could not be realized, because this technique does not reveal the nature of particles, and hence it was not possible to distinguish aggregates of Ag NPs from those formed by various cell-culture components, including combinations with NPs. Therefore, we used the nearest model available and measured particle sizes in PBS (without cells) incubated with 4.0 and 8.0 µg/mL of NPs for 1 and 2 hours at 37°C. It was found that addition of Ag NPs led to the formation of bigger particles during the first hour of incubation, with no changes at longer times. A typical result is shown in Figure 5. After 1 hours' incubation with 4 µg/mL Ag NPs ( Figure 5A), the bimodal size distribution was found, with one peak of low intensity (at 60-100 nm) and one peak of high intensity (at 670-720 nm); this distribution underwent practically no changes in the subsequent 1 or more hours of incubation ( Figure 5B). A similar picture was observed with 8 µg/mL Ag NPs ( Figure 5C and D), but here both peaks shifted to the bigger sizes, with the low-intensity peak at 150-170 nm and the high-intensity peak at 1,500-1,900 nm. Since 1) mono-and bivalent cation concentrations in PBS were the same as in the cell-culture medium and 2) there were no other components in PBS capable of aggregation to the sizes observed except for Ag NPs, one can conclude that aggregation of Ag NPs (and the corresponding decrease in their toxicity) may take place at high Ag NP concentrations in cell culture. Formation of the Ag NP aggregates in cell-culture media with sizes in similar range has also been noted by other authors. 44,45 This is most likely provoked by NP association due to cation adsorption on their negative surface groups. 
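The quadrant logic behind the FL1/FL2 dot plots described above can be sketched as follows; the gate thresholds and the example events are invented, since in practice gates are set from unstained and single-stained controls.

# Quadrant classification of flow cytometry events: annexin V-FITC (FL1) and
# PI (FL2) intensities classify each event as living, early apoptotic, late
# apoptotic, or necrotic. Thresholds and events below are hypothetical.

FITC_GATE = 100.0    # hypothetical FL1 threshold
PI_GATE = 80.0       # hypothetical FL2 threshold

def classify(fl1_fitc, fl2_pi):
    if fl1_fitc < FITC_GATE and fl2_pi < PI_GATE:
        return "living"              # annexin-/PI-
    if fl1_fitc >= FITC_GATE and fl2_pi < PI_GATE:
        return "early apoptotic"     # annexin+/PI-
    if fl1_fitc >= FITC_GATE and fl2_pi >= PI_GATE:
        return "late apoptotic"      # annexin+/PI+
    return "necrotic"                # annexin-/PI+

events = [(30, 10), (250, 20), (400, 300), (40, 200), (20, 15)]
counts = {}
for fl1, fl2 in events:
    label = classify(fl1, fl2)
    counts[label] = counts.get(label, 0) + 1
for label, n in counts.items():
    print(f"{label}: {100.0 * n / len(events):.0f}%")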
Discussion As seen from the results presented, Ag NPs obtained by biochemical synthesis exhibited strong toxicity toward tumor cells in the concentration range studied. The effect was significant for both cell types, ie, for both adhesive and suspension cells. The TT lay at 0.5-2.0 µg/mL. Comparison of NP toxicity with that of silver ions in equivalent concentrations showed that Ag + ions exerted no influence on the viability of HeLa cells, while for U937 they demonstrated strong toxicity equal to that of NPs. This probably reflects the different origin of the toxic action of Ag NPs: with HeLa, the cell viability is affected by the NPs themselves, but not by the metal ions released from their surface, while with U937 cells, the toxicity of NPs is mediated by the metal ions. Therefore, the toxic effect of Ag NPs may be realized through different mechanisms for the two cell types studied. We found it useful to compare our results on cell viability with those in the literature, in studies of the toxicity of Ag NPs in vitro. Among this pool of data, we considered recent papers dealing with cancer cells where similar assays were applied. The results obtained for adhesive cells after 24 hours of incubation are presented in Table 1. As the main criterion for comparison, a TT value was used that could be easily found from the cell viability versus NP-concentration plots similar to those shown in Figure 3. As seen in Table 1, the TT values obtained for the same cell line differed from each other, either by several times (with HeLa cells) 44,46 or by orders. For example, with A431 (human skin carcinoma) cells, the TT varied from 1.51 µg/mL 22 to .50 µg/mL, 47 and with A549 (human lung carcinoma epithelial-like cells) from 0.5 µg/ mL 48 to .50 µg/mL. 47 Such a discrepancy is unlikely to have resulted from different particle sizes, since it is clear that here the difference in particle sizes did not correlate with their toxicity: the TT values coincided for 50 and 100 nm NPs 44 or differed significantly for particles of similar size range. 21,22,46 It is also worth noting that the data considered do not support the widespread opinion about the higher biological activity of small particles compared to that of bigger ones. For example, the 2-5 nm NPs give a TT value of 60 µg/mL, 46 while with the same cells, for 10-100 nm NPs this value is 10-20 µg/mL. 44 Also, for the other cells, smaller particles appear to be less toxic than bigger ones. 47,48 Comparison with the TT value obtained by us for the adhesive cells after 24 hours' incubation (2 µg/mL) shows that it was significantly smaller than those obtained for the same cells by other authors. 44,46 However, our result lies close to that found by Arora et al 22 27 Ag NP effects on hela and U937 cells bigger NPs. 21,22,44,46 So it seems that at present, particle sizes in the initial NP solution do not play a role in viability, at least for the tumor cells considered here. The most probable reason for this may be particle aggregation in cell-culture medium, reported by various authors 21,45 and observed also in this work. It should be added that as follows both from our results and evidence present in the literature discussed, aggregated NPs remain biologically active toward various objects, including cultured cells. As demonstrated recently by Hackenberg et al 25 on mesenchymal stem cells, the aggregation does not prevent silver particles from penetration into the cell interior, including the nucleus, with related cytotoxic and genotoxic effects. 
Certain comments are needed about the contribution of silver ions to the toxic effects of NPs. It is obvious that water solutions of silver NPs can contain a certain amount of silver ions, either because of the incomplete reduction in the process of NP synthesis or produced by sonication of the aggregated particles aimed at obtaining the desired size distribution. Since Ag + ions are known to have strong biological activity, it is important to know their concentration and to minimize it or eliminate these ions if possible. Experimental evidence and exhaustive discussion of this point was given in the recent publication by Beer et al, 20 where the toxicity of Ag NPs and Ag + ions was investigated on A549 cell culture. In particular, it was shown by MTT assay that 1) cell viability may be decreased after incubation with supernatant containing silver ions in comparison with a control (H 2 O), and 2) cell viability depends on the content of Ag + ions present in the Ag NP preparation: the bigger the ions content, the smaller the percentage of viable cells. As for our experiments, there was no need to measure the residual Ag + ion concentration, since there were no reasons to suggest the existence of residual silver ions in the Ag NP solution. First, with the biochemical synthesis method used by us, the highly effective reducing agent (Qr) provided practically 100% reduction and thus a 100% yield of NPs (see the "Silver nanoparticle synthesis and characterization" section). This peculiarity of our method is described in detail in our papers and recent book. 2,36,37 Second, we did not use sonication or any other method of artificial decrease of particle sizes. Therefore, there was no danger of Ag + ion release from the NP surface in the process of such pretreatment. Conclusion A water solution of Ag NPs obtained by an original method of biochemical synthesis was applied here for studies of Ag NP cytotoxicity on two types of tumor cells: adhesive (HeLa) and suspension (U937). Cytotoxicity was estimated as change in cell viability by MTT assay, as well as changes in their state (apoptosis and necrosis) by flow cytofluorometry. Minimal Ag NP concentration corresponding to the obvious decrease in cell viability (TT) after 4 hours' incubation was 2.0 µg/mL for both HeLa and U937 cells. After 24 hours' incubation the TT value for HeLa remained the same, while for U937 it became smaller -0.5 µg/mL. Control incubation with the stabilizer (AOT) at its concentration in the NP solution showed that AOT is less toxic than Ag NPs for both cell lines. Comparison with the effect of silver ions in concentrations equivalent to the silver 28 Kaba and egorova concentrations introduced with Ag NPs revealed that Ag + ions exerted no toxic effect on HeLa cells and demonstrated a toxicity practically equal to that of Ag NPs on U937 cells. This difference between the cell lines presumably resulted from the difference in cell-surface area available for silver ions. Studies on the relationship among viable, apoptotic, and necrotic cells at each incubation time showed that after 4 hours' treatment, cell death mediated by apoptosis and necrosis increased in a dose-dependent manner. After 24 hours' treatment, a content of early and late apoptotic and necrotic cells changed: living cells turned early apoptotic (HeLa) or late apoptotic/necrotic (U937). We conclude that HeLa cells were more resistant to apoptosis-mediated death in comparison with U937, but cell viability was strongly decreased in both cell lines. 
In summary, our results indicate the significant toxicity of Ag NPs obtained by biochemical synthesis toward the HeLa and U937 tumor cells. It follows that these NPs may prove to be a possible candidate for creation of corresponding antitumor drugs.
2016-05-04T20:20:58.661Z
2015-03-05T00:00:00.000
{ "year": 2015, "sha1": "3355a12fdcc16b813c0659a0f6f2d625e3c6f02e", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=24032", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3355a12fdcc16b813c0659a0f6f2d625e3c6f02e", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [] }
249964123
pes2o/s2orc
v3-fos-license
Permafrost and Draining Lakes in Arctic Alaska In the Arctic, the ground is frozen most of the year. Only the top layer of soil thaws each summer. This frozen ground, called permafrost, contains a lot of frozen water (ice). There are many small lakes in the Arctic, in low spots formed from melted ice. But melting ice does not just create lakes, it can destroy them too. Melting permafrost can create gullies that let the water drain out of a lake. Most lakes in the Arctic are far from where people live, so we watch them using pictures taken from satellites. Recently, we have seen the water drain out of many lakes, which can affect plants and animals. We measure the number and size of drained lakes caused by thawing permafrost to understand how the Arctic is changing. WHERE CAN WE FIND PERMAFROST? Permafrost is found in the cold parts of the world. Large parts of Canada and Alaska have permafrost (Figure ). Our map shows two permafrost zones that depend on how much of the land is underlain by permafrost. In the continuous permafrost zone permafrost is present CONTINUOUS PERMAFROST ZONE: Regions of the Earth where permafrost covers most (more than %) of the land area. nearly everywhere. In the discontinuous permafrost zone, permafrost DISCONTINUOUS PERMAFROST ZONE: Regions of the Earth where permafrost is present but covers less than % of the whole land area. is missing in warm places, such as sunny hillslopes. In the Arctic-the part of the world so far north that it is too cold for trees to live-most of the land has continuous permafrost. Figure Permafrost map of North America. The colors represent zones that di er in how much land area is underlain by permafrost: continuous permafrost ( -% permafrost cover) and discontinuous permafrost ( -% permafrost cover). ICE IN THE GROUND Permafrost usually contains ice, known as ground ice. Where the ground is mostly sand and gravel, ground ice usually fills just the spaces between the grains. Where the ground is solid rock, ice is usually found only in the cracks in the rock. Where the ground is made of very small particles like silt and clay, ground ice is the cement that holds the ground together. Ground ice can also exist as masses of nearly pure ice. The most common kind of pure ice in permafrost is ice wedges, which create a distinctive pattern of polygons on the ICE WEDGE: Ice in the ground that forms by repeated freezing and thawing. A contraction crack in permafrost fills with meltwater that freezes to form a wedge. landscape ( Figure ). Ice wedges appear wedge-shaped when seen from the side, and when seen from above they form a network that follows the polygon outlines. (B-F) The steps in the formation of ice wedges, as seen from the side. In winter the ground cracks, then in the spring the cracks fill with meltwater that freezes and lasts through the summer. In the following winter, the ground cracks in the same place and the wedge grows wider. The way that ice wedges form was first discovered over years ago [ ]. Ice wedges form in permafrost by freezing and thawing over many years' time. In the winter, the permafrost and the frozen active layer form a solid mass that contracts (shrinks) as it gets colder. If it contracts enough it will crack, usually in a pattern of polygons that reminds us of kids.frontiersin.org June | Volume | Article | cracks in dried mud. Mud cracks also form by contraction as the mud dries. The di erence is that the ice-wedge polygons are much larger than mud-crack polygons. 
Ice-wedge polygons are usually about -m ( -feet) across ( Figure ). When winter ends, the snow melts but the ground is still cold and the cracks are still open, so water flows into the cracks and freezes. Down in the permafrost, this narrow wedge of ice survives the summer thaw. The crack is a weak spot that is likely to crack again in the future, so the ice wedge will grow wider over time. In some places the wedges grow so wide that over half of the ground near the top of the permafrost is made of ice wedges. When permafrost contains a lot of ice it is called ice-rich permafrost. ICE-RICH PERMAFROST: Permafrost that contains so much ice that it sinks or flows when it thaws. PERMAFROST THAW LAKES Ice-wedge polygons are one of many unique permafrost landforms. Another permafrost landform closely tied to ice-wedge polygons is the thaw lake ( Figure ). Thaw lakes were first given this name and THAW LAKE: A lake that formed where the land sank due to melting of ice in the ground. described over years ago [ ]. They form when ice-rich permafrost thaws in flat, wet environments. Thawing of permafrost causes the ground surface to sink down. In places with many large ice wedges, the ground can sink enough to form a basin for a lake, which is then called a thaw lake. Thawing of permafrost can begin with just a small pond. The water in the pond warms up by absorbing energy from the sun, causing ground ice below and next to the pond to thaw. Over time the pond becomes bigger and deeper, absorbing more energy from the sun and melting more ice. This process drives itself to speed up as time goes on. Thaw lakes are usually shallow, but they can grow to be more than km (half a mile) across. . Lake ( ) was still full of water, and appears black. The circle at ( ) outlines the mud bottom of a former lake that drained just before the image was taken. The circle at ( ) outlines a lake that drained in , and vegetation has begun to grow on the lake bottom. The circle at ( ) outlines a lake that drained long ago. The outlines of many other drained lakes are visible; all drained before our earliest aerial photographs from the s. The banks of thaw lakes do not hold back the water very well. Water from the lake can quickly cut a trench by melting the ground ice and washing away the soil. Water can flow out of the lake through a new trench, draining most of the water in just a few days. In places with thaw lakes, you can often see the outlines of lakes that drained both recently and long ago. These are most easily seen on pictures taken from airplanes or satellites (Figure ). The study of the earth using kids.frontiersin.org June | Volume | Article | images taken from above is called remote sensing. Remote sensing REMOTE SENSING: The study of land or objects from a distance using cameras or other instruments. An example is the study of the Earth using images taken from satellites. is very useful for the study of permafrost in remote places like the far north. It is especially interesting to study images taken at di erent times to see what changes have happened. Many thaw lakes have drained in northern Alaskan parks in recent years. National Park Service scientists use satellite images taken every summer to measure the number and size of drained lakes caused by thawing permafrost, and to determine the year when they drained. We use computers to examine hundreds of satellite images. We found that lakes drained in and in , while no more than had ever drained in any single year since . 
There are fewer lakes now than there were on the earliest historical images that we have (photographs from airplanes), which were taken years ago [ - ]. In the northern plains of Bering Land Bridge National Preserve, the area of lakes is now almost % less than it was years ago [ ]. The weather in the northern Alaska parks has been warmer and snowier than normal in the past few years [ ]. This has made it easier for new gullies to form and drain the lakes. The recent unusual weather is probably due to natural year-to-year variations combined with long-term climate change. For centuries, lakes in the Arctic have been forming and draining [ ]. However, climate change is probably causing them to drain more often now than they did in the past. When the climate warms, ice wedges that were solidly frozen in permafrost are more likely to melt and create new gullies and small ponds. When lakes drain, they can no longer provide homes for water birds such as loons and ducks. On the other hand, animals can graze the lush new vegetation that grows on the bottoms of drained lakes. We have seen caribou, geese, and even grizzly bears grazing on drained lake bottoms. We will keep studying drained lakes to help us understand how the loss of lakes affects the plants and animals that live in the Arctic. Permafrost is very vulnerable to change as the climate warms. Draining lakes are just one example of how thawing of permafrost is changing the Arctic.

FUNDING This work was funded by the National Park Service's Inventory and Monitoring Program.
2022-06-24T15:23:10.523Z
2022-06-22T00:00:00.000
{ "year": 2022, "sha1": "831071fbaece2c41a6728e5123f90651742e19ab", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/frym.2022.692218/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "ea130eeb6cbabcd0686a4b5313f5dd0ffd5bf8fd", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [] }
61221100
pes2o/s2orc
v3-fos-license
Fine grained event processing on HPCs with the ATLAS Yoda system High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers. Introduction With the increased data volume recorded during LHC Run2 and beyond, it becomes critical for the experiments to not only efficiently use all CPU power available to them, but also to leverage computing resources they don't own. From this perspective, high performance computing resources (HPC) are very valuable for HEP experimental computing. Due to the massive scale of many HPC systems, even fractionally small utilization of their computing power can yield large returns in processing throughput. Let us consider one example: Edison supercomputer at the National Energy Research Scientific Computing Center (NERSC), Berkeley, USA. With 130K Intel Haswell CPU cores, this machine was #25 on the TOP 500 Supercomputer Sites list in November 2014. One such machine, if hypothetically fully available for the ATLAS experiment [1], could satisfy all ATLAS needs in Geant4 [2] simulation (∼5 billion events/year). Porting of regular ATLAS workloads (e.g. simulation, reconstruction) to HPC platforms does not come for free. For efficient usage of HPC systems the application needs to be flexible enough to adapt to the variety of scheduling options -from back-filling to large time allocations. In ATLAS this issue has been addressed by implementing a new approach to the event processing, a fine-grained Event Service [3], in which the job granularity changes from input files to individual events or event ranges. After processing each event range, the Event Service saves the output file to a secure location, such that Event Service jobs can be terminated practically at any time with minimal data losses. Another requirement for efficient running on HPC systems is that the application has to leverage MPI mechanisms in order to be able to run on many compute nodes simultaneously. For this purpose we have developed an MPI-based implementation of the Event Service (Yoda), which is able to run on HPC compute nodes with no internet connectivity with the outside world. In section 2 of this paper we describe the concept and the architecture of the Event Service. 
Section 3 describes the implementation details of Yoda and the flexibility it offers in choosing between available job scheduling strategies (back-filling vs allocation). Finally, in section 4 we present the current status of Yoda developments and the results of scaling Yoda up to 50K parallel event processors when running ATLAS Geant4 simulation [4] on the Edison supercomputer at NERSC.

ATLAS Event Service
A new implementation of the ATLAS production system [5] includes the JEDI (Job Execution and Definition Interface) extension to PanDA [6], which adds new functionality to the PanDA server to dynamically break down tasks based on optimal usage of available processing resources. With this new capability, the tasks can now be broken down at the level of either individual events or event clusters (ranges), as opposed to the traditional file-based task granularity. This allows the recently developed ATLAS Event Service to dynamically deliver to a compute node only that portion of the input data which will actually be processed there by the payload application (simulation, reconstruction, data analysis), thus avoiding costly prestaging operations for entire data files. The Event Service leverages modern networks for efficient remote data access and highly scalable object store technologies for data storage. It is agile and efficient in exploiting diverse, distributed and potentially short-lived (opportunistic) resources: "conventional resources" (Grid), supercomputers, commercial clouds and volunteer computing. The Event Service is a complex distributed system in which different components communicate with each other over the network using HTTP. For event processing it uses AthenaMP [7], a process-parallel version of the ATLAS simulation, reconstruction and data analysis framework Athena. A PanDA pilot starts an AthenaMP application on the compute node and waits until it goes through the initialization phase and forks worker processes. After that, the pilot requests an event-based workload from the PanDA JEDI, which is dynamically delivered to the pilot in the form of event ranges. The event range is a string which, together with other information, contains the positional numbers of events within the file and a unique file identifier (GUID). The pilot streams event ranges to the running AthenaMP application, which takes care of the event data retrieval, the event processing and the production of the output files (a new output file for each range). The pilot monitors the directory in which the output files are produced and, as they appear, sends them to an external aggregation facility (Object Store) for final merging.

Yoda - Event Service on HPCs
Supercomputers are one of the important deployment platforms for Event Service applications. However, on most HPC machines there is no internet connection from compute nodes to the outside world. This limitation makes it impossible to run the conventional Event Service on such systems, because the payload component needs to communicate with central services (e.g. job brokerage, data aggregation facilities) over the network. In Summer 2014 we started to work on an HPC-specific implementation of the Event Service which would leverage MPI for running on multiple compute nodes simultaneously.
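Before turning to that MPI-based implementation, the sketch below illustrates the event-range bookkeeping just described: a single event-range assignment and a simplified pilot loop that streams ranges to the payload and ships finished per-range outputs to an object store in near real time. It is purely illustrative; the field names and the helper callables (fetch_event_ranges, send_to_payload, upload_to_object_store) are hypothetical stand-ins, not the actual PanDA or pilot API.

```python
# Illustrative sketch of event-range dispatch in an Event-Service-like pilot.
# All names below are hypothetical; the real PanDA pilot uses its own protocol.
import json
import time
from pathlib import Path

# A single event-range assignment: positional event numbers within one input
# file plus the file's unique identifier (GUID), as described in the text.
example_range = {
    "eventRangeID": "10204500-1234567890-3",
    "startEvent": 21,
    "lastEvent": 30,
    "GUID": "8f6c2a1e-example-guid",      # unique identifier of the input file
    "LFN": "EVNT.01234567._000001.pool.root.1",
}

def pilot_loop(fetch_event_ranges, send_to_payload, upload_to_object_store,
               output_dir="outputs", poll_seconds=10):
    """Request event ranges, stream them to the payload, ship finished outputs."""
    out = Path(output_dir)
    shipped = set()

    def ship_new_outputs():
        # Each processed range yields its own small output file; uploading it
        # immediately means little work is lost if the job is terminated.
        for f in out.glob("*.root"):
            if f.name not in shipped:
                upload_to_object_store(f)
                shipped.add(f.name)

    while True:
        ranges = fetch_event_ranges()            # ask the server for more work
        if not ranges:
            ship_new_outputs()                   # flush anything still pending
            break
        for r in ranges:
            send_to_payload(json.dumps(r))       # stream ranges to the payload
        ship_new_outputs()
        time.sleep(poll_seconds)
```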
To speed up the development process and also to preserve all functionality already available in the conventional Event Service, we reused the existing code and implemented lightweight versions of the PanDA JEDI (Yoda, a diminutive Jedi) and the PanDA Pilot (Droid), which communicate with each other over MPI. Figure 1 shows a schematic of a Yoda application, which implements the master-slave architecture and runs one MPI rank per compute node. The responsibility of Rank 0 (Yoda, the master) is to send event ranges to the other ranks (Droid, the slave) and to collect from them the information about the completed ranges and the produced outputs. Yoda also continuously updates event range statuses in a special table within an SQLite database file on the HPC shared file system. The responsibility of a Droid is to start an AthenaMP payload application on the compute node, to receive event ranges from Yoda, to deliver the ranges to the running payload, to collect information about the completed ranges (e.g. status, output file name and location) and to pass this information back to Yoda. Yoda distributes event ranges between Droids on a first-come, first-served basis. When some Droid reports completion of an event range, Yoda immediately responds with a new range for this Droid. In this way, Droids are kept busy until all ranges assigned to the given job have been processed or until the job exceeds its time allocation and gets terminated by the batch scheduler. In the latter case, the data losses caused by such termination are minimal, because the output for each processed event range gets saved immediately in a separate file on the shared file system.

Connection with PanDA
In order to use Yoda for running ATLAS production workloads, it has to be connected with the ATLAS production system. For this purpose we have developed a special version of the PanDA Pilot (runJobHPC), which provides this connection. The runJobHPC application runs on the interactive compute nodes of HPC systems. Thus, it is able to communicate with central PanDA services over the network. The runJobHPC application pulls job definitions from the PanDA server and stages in all required input files on the HPC shared file system. It submits Yoda jobs to the HPC batch queue, monitors their statuses and also streams out the output files to an external aggregation facility (Object Store), where they are used by separate merge jobs for producing final outputs.

Job scheduling options
Yoda is flexible in defining the duration and size of MPI jobs. We have successfully scaled Geant4 simulation within the Yoda system up to two thousand ranks and have not observed any performance penalties coming from the MPI communication between the ranks. No slowdowns in the average event processing time have been detected either. On the other hand, the fact that event processors within Yoda write a new output file to the shared file system for each event range gives us the flexibility of preemption without the application needing to support or utilize check-pointing. Yoda jobs can be terminated practically at any time with minimal data losses; only the data corresponding to event ranges currently being processed will be lost. This means that Yoda jobs can be submitted to the HPC batch queue in a back-filling mode: the runJobHPC application can detect the availability of a number of HPC compute nodes for a certain period of time and promptly submit a properly sized Yoda job to the batch queue to utilize the available resources.
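To make the master-worker dispatch described above concrete, here is a minimal mpi4py sketch of the pattern. It is an illustration only, not the Yoda code itself: the AthenaMP payload is faked by a short sleep, the SQLite bookkeeping is reduced to a comment, and the toy job assumes more event ranges than workers.

```python
# Minimal master-worker sketch of Yoda-style event-range dispatch (mpi4py).
# Illustration only: the real Yoda/Droid components wrap AthenaMP payloads
# and persist range statuses to an SQLite file on the shared file system.
import time
from mpi4py import MPI

TAG_WORK, TAG_DONE, TAG_STOP = 1, 2, 3

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:                                   # Yoda: the master on rank 0
    ranges = [{"id": i, "startEvent": 10 * i, "lastEvent": 10 * i + 9}
              for i in range(100)]              # toy event ranges
    status = MPI.Status()
    active = size - 1
    # Prime every Droid with one range, then serve first-come, first-served.
    for droid in range(1, size):
        comm.send(ranges.pop(0), dest=droid, tag=TAG_WORK)
    while active > 0:
        report = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status)
        droid = status.Get_source()
        # Here Yoda would record `report` in its SQLite bookkeeping table.
        if ranges:
            comm.send(ranges.pop(0), dest=droid, tag=TAG_WORK)
        else:
            comm.send(None, dest=droid, tag=TAG_STOP)
            active -= 1
else:                                           # Droid: one worker per node
    status = MPI.Status()
    while True:
        erange = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        time.sleep(0.01)                        # stand-in for AthenaMP processing
        comm.send({"id": erange["id"], "output": f"out_{erange['id']}.root",
                   "status": "finished"}, dest=0, tag=TAG_DONE)
```

Run under MPI, for example with `mpirun -n 4 python yoda_sketch.py`; rank 0 plays the Yoda role and every other rank a Droid.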
Development status and performance tests
We have chosen ATLAS Geant4 simulation as a first use case for Yoda (and for the Event Service in general). Simulation jobs used more than half of the ATLAS CPU budget on the Grid in 2014. Thus, by offloading simulation to other computing platforms (e.g. HPC, clouds), we can free a substantial amount of ATLAS Grid resources. Also, simulation is a CPU-intensive application with minimal I/O requirements and relatively simple handling of meta-data. These characteristics allowed us to make rapid progress in the development of a first version of the Event Service, which was delivered in Summer 2014. After that we switched our development efforts to Yoda. By reusing the code of the conventional Event Service, we developed a first working implementation of Yoda in October 2014 and presented it at Supercomputing-2014 in the DOE ASCR demo. In early 2015, Yoda was validated by running a series of ATLAS Geant4 production jobs. The output of these jobs was compared to the output of the same simulation jobs on the Grid. The comparison confirmed that Geant4 simulation within Yoda on HPC and the simulation on the Grid produce the same results. Also in early 2015 we ran a series of tests on the Edison supercomputer with the goal of checking how the performance of Yoda scales with the number of MPI ranks. The results of these tests are presented in Figure 2, which shows very good scaling of Yoda up to 50K CPU cores (more than 2K MPI ranks). The key to such good scaling was to avoid heavy load on the Edison shared file system. This was achieved by delivering ATLAS software releases to the RAM of each compute node. Otherwise the initialization time of Yoda payload applications (AthenaMP) would not scale past 100 MPI ranks.

Summary
We have developed Yoda, an MPI-based implementation of the Event Service, specifically for running ATLAS workloads on HPCs. ATLAS Geant4 simulations within Yoda have been successfully validated for physics, which proves that Yoda is ready to run ATLAS simulation production workloads on supercomputers. Thanks to its flexible architecture, Yoda allows efficient usage of available HPC resources by running jobs either in large time allocations or in back-filling mode. The performance tests have demonstrated that Yoda scales very well with the number of MPI ranks, which makes it possible to efficiently run Yoda applications on thousands of HPC compute nodes simultaneously.
2017-09-12T18:49:07.080Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "2eff4109841fd4952c6b69de6fa7bbc7b99ceabb", "oa_license": "CCBY", "oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/664/9/092025/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "9aa0e9aa3e20ae1c629eed4c56ad003e0bade910", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
158223299
pes2o/s2orc
v3-fos-license
Willingness to Pay for Improved Household Solid Waste Collection in Blantyre, Malawi : Insufficient staff, inappropriate collection vehicles, limited operating budgets and growing, hard to reach populations mean that solid waste management remains limited in most developing countries; Malawi is no exception. We estimated the willingness to pay (WTP) for two hypothetical solid waste collection services. Additionally, we tested the impact of the WTP question positioning relative to environmental perceptions on respondents’ WTP. The first scenario involved a five minute walk to a disposal facility; the second scenario involved a 30 min walk. Additionally, the order of the question was randomized within the questionnaire. A WTP value of K1780 was found for the five minute walk scenario when the question was placed first, and K2138 when placed after revealing the respondent’s perceptions on the environment. In the 30 min walk scenario, WTP was K945 when placed first and K1139 when placed after revealing the respondent’s perceptions on the environment. The estimated values indicate that there is both a willingness to pay for solid waste services and that there are at least two options that would be acceptable to the community; a pilot scale implementation would be required to validate the hypothetical values, especially given the dependency on problem framing. Community financing should be considered as a sustainable approach to solid waste management in underserved areas. Introduction Adequate management of solid waste should promote minimum waste generation, and include regular collection, voluntary separation, safe and adequate storage, effective treatment and safe disposal (UN-Habitat 2012).Poor waste management reduces the quality of life by providing food and breeding conditions for vermin and disease vectors, producing odor, diminishing aesthetics and contaminating surface and ground water (Hoornweg and Bhada-Tata 2012). Financing for the transport, human resources, and facilities that are required is usually supplied via some sort of tax base, but in resource-poor environments, solid waste management (SWM) is usually under-funded and as a result, poorly managed. Located in the southern region of Malawi, Blantyre is a commercial city as well as the second largest city in the country.The population of Blantyre was 661,256 in 2008(National Statistical Office 2008).Over 70% of the population lives in unplanned areas, which occupies 23% of the land area in Blantyre (UN-Habitat 2012).Although there is collection (and semi-controlled dumping) of waste in formal areas, informal areas are left unserved (Palamuleni 2002;Government of Malawi 2010;Maoulidi 2012;Maganga 2013;Barre 2014).Poor solid waste management is common in Malawi (indeed in most developing countries), and is partly a result of inadequate financing, institutional will and capacity (Maganga 2013;Barre 2014). Where waste is not collected by the city, it is left on road sides and river banks, which has resulted in surface and ground water contamination (Palamuleni 2002).Given the lack of institutional support, some cities have adopted independent or community-led solutions.The challenge however remains knowing how much, if anything, residents would pay to an independent contractor to fill the role that is left empty by the authorities (i.e., regular collection, transport and disposal). 
The value of a good or service can be solicited through the good or service's revealed preference or stated preference.A revealed preference is estimated by how much is actually paid or spent on a good or service, i.e., the worth is revealed by present actions.A stated preference, on the other hand, is theoretical and though realistic in the respondent's mind, may not be true once tested, or revealed. Stated preference methods are often used to compare the costs and benefits of policy changes before they actually happen (DEFRA 2007).The ultimate goal is to estimate the total economic value of a good or service which does not have a pre-determined market price, such as solid waste collection, though it has not been used extensively for this specific purpose (Breffle et al. 1998).A variety of methods exist, but the double-bound dichotomous choice method is relatively quick and simple for the respondent (compared to a choice experiment) and generates a more precise range for willingness to pay (WTP) (DEFRA 2007;Cameron and Quiggin 1994;Lopez-Feldman 2012).By using the dichotomous choice format, accuracy is increased as more data points are fitted to the function for willingness to pay.Clear boundaries are yielded from the sequential bid offers (Cameron and Quiggin 1994). In Malaysia, contingent valuation was used to estimate the benefits of improved solid waste management in Kuala Lumpur: households were willing to pay slightly more for the system involving voluntary source separation than for the system where it was mandatory, though the difference was insignificant (Afroz and Masud 2011).In Malawi, dichotomous contingent valuation was used to determine the willingness to pay for solid waste collection in Lilongwe, and was found to be K92 per household per month (Maganga 2013). Although stated preference methods can provide results that are exaggerated up or down (Bateman et al. 2001;Hensher 2010), there is limited, but growing evidence to show that preferences obtained through valuation are useful in revealing an individual's perceptions towards policies that have not yet been implemented (Tilley and Günther 2016).In the environmental sanitation sector, some work has used stated preference methods for solid waste (Czajkowski et al. 2014) however, across all disciplines there are few examples of stated preferences being validated against revealed preferences, partly because few of the tested scenarios are implemented, or because they are too abstract to do so. South Lunzu is a large, fast growing area within the boundaries of Blantyre and without waste collection services.It does, however, have roads and limited coverage of water and electricity, which indicate growing wealth and a population that could consider paying for solid waste collection.Therefore, in order to determine the willingness to pay for this service, we used a double bounded dichotomous choice contingent valuation.Furthermore, we compared the willingness to pay for a self-collection service with that of a kerbside (roadside) collection service and tested the impact of environmental framing with regard to the stated WTP value. Findings from this research can be used to identify opportunities for recycling, improve environmental conditions, create business opportunities in waste management, create employment, and further increase investments in the solid waste value chain. 
Methodology
A dichotomous choice questionnaire was used to solicit WTP estimates; socioeconomic characteristics, household practices and opinions about SWM, as well as questions related to concern for the environment, were also collected (Afroz and Masud 2011; Maganga 2013). The detailed questionnaire is in Appendix A.

Starting Bids
In order to solicit the WTP for solid waste management, a dichotomous choice contingent valuation method was used. First, the respondent was presented with a scenario for solid waste collection and a fee that would be paid for the service. If the response was yes to this question, a second question followed, and this offer was double the amount of money presented in the first question. If the respondent refused to pay the amount presented in the first question, then the second question contained half of the initial amount presented (Bateman et al. 2001; Cameron and Quiggin 1994). The questionnaire was pre-tested on a similar sample prior to the final data collection. The sample population for the pre-test was randomly selected from the same area of interest. The pre-test results were used to refine the contingent valuation questions as well as to derive parameters that were used to calculate the actual sample size. The pre-test was also used to train enumerators. Respondents were asked to comment on the clarity as well as the difficulty of the questions at the end of each interview session during the pilot study. The questionnaires in both the pre-test and the final data collection were administered in person by trained enumerators. Uncertain respondents have a tendency to focus their response near the suggested amount in single bounded dichotomous choice questions. If poorly planned, double bounded dichotomous choice valuation may result in WTP values centered around the suggested amounts. This kind of respondent bias is known as anchoring. A question design that halves or doubles the suggested amount in the initial question reduces the impact of starting point bias, which is caused by uncertain respondents who anchor their willingness to pay on the bid amount presented in the first question (Carmona-Torres and Calatrava-Requena 2006). Doubling the amount presented in the initial bid ensures that the follow-up bid crosses the respondent's anchoring threshold; as such, the bids represent both extremes of WTP. In both the pre-test questionnaire and the final questionnaire, there were five starting bids (Carmona-Torres and Calatrava-Requena 2006; Honu 2007). During the pre-test, the initial bids were K1000, K3000, K5000, K7000 and K9000. The follow-up bids covered values from K500 to K18,000 (at the time of writing, the exchange rate was approximately K750/USD and GDP per capita was 1,169 current international $ in 2016 (World Bank 2018)). The bid amounts in the pre-test questionnaire were spread widely so as to include as many WTP values as possible. The distribution of "yes" responses from the pre-test was measured as the probability of an individual agreeing to participate at that fee. The pilot results showed that most respondents were willing to participate when the amounts offered were between K500 and K10,000. In the final questionnaire the initial bid amounts used were K1000, K2000, K3000, K4000 and K5000, and thus the follow-up bids had a maximum amount of K10,000, as shown in Table 1.
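As an illustration of this double-bounded design, the small helper below generates the follow-up bid from the first response and maps the answer pair to the four outcome categories used later in the likelihood. It is a sketch of the survey logic as described above, not code used in the study.

```python
# Sketch of the double-bounded bid logic described above (not study code).
INITIAL_BIDS = [1000, 2000, 3000, 4000, 5000]   # Kwacha, final questionnaire

def follow_up_bid(initial_bid, first_answer_yes):
    """Second bid: double the first bid after a 'yes', half of it after a 'no'."""
    return initial_bid * 2 if first_answer_yes else initial_bid // 2

def outcome(first_answer_yes, second_answer_yes):
    """Map the answer pair to the indicator categories ss, sn, ns, nn."""
    return {(True, True): "ss", (True, False): "sn",
            (False, True): "ns", (False, False): "nn"}[
        (first_answer_yes, second_answer_yes)]

# Example: initial bid K3000, respondent accepts it but refuses K6000 -> "sn"
t1 = 3000
t2 = follow_up_bid(t1, True)      # 6000
print(t2, outcome(True, False))   # 6000 sn
```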
Hypothetical Solid Waste Management System In one set of the questionnaires, the hypothetical scenario and willingness to pay questions were presented before any other questions (e.g., demographic) and in the other set, the hypothetical scenario and WTP questions were presented after the section on opinions towards present solid waste management practice.One of the hypothetical waste collection systems was a kerbside (roadside) collection system whereby the household would be required to place waste containers on the road side near the house for collection on a specific day of the week.The waste would then be transported to a transfer station or disposal site by bicycle carts and small trucks.Payment would be on monthly basis to a community-managed fund that would cover the cost of operations. The second system would require household members to carry the waste to a transfer station.An appointed entity would then transport the waste out of the area.The payment and management method would remain the same in both scenarios.Additionally, the WTP questions were randomized, so that they were placed either before or after some questions about the respondents' views on the environment.For example, we asked: "Among the following environmental issues, which of these deserves the most attention: water pollution, air pollution, deforestation, solid waste management, etc." which primed the respondent for thinking about the environment and their priorities.Altering the placement of the question allowed us to test the impact of framing, i.e., preparing the respondents' mind to more fully consider the implications of their choice. A summary of the scenarios is presented in Table 2. Sample Design Ten households were selected from each of the five internal subdivisions in South Lunzu for a sample of 50 households for the pilot study.The tenth household from a chosen junction in a street was selected as the first household in the sample.Subsequent households along the street were selected using the same interval of ten, whilst alternating sides of the street, until the tenth household in the area was selected. The final sample size was calculated using the standard deviation (σp) and the standard error (E) of the WTP from the pilot (Triola 2001). From the pilot study, mean WTP for scenario one was K3315 with a standard deviation of K4491 and a standard error of K717.As a result, the target minimum sample size for the final study was calculated to be 841 households.In scenario two, the pilot study resulted in a mean WTP of K3493 with a standard deviation of K3459 and a standard error of K567, giving a minimum sample size of 1412 households.Due to time and resource limitations 1250 households were included in the final study.Like in the pilot survey, the 10th household on a street was also selected from each sub-division.Using the interval of 10, a total of 250 households were sampled from each of the five selected sub-divisions. Analysis The Dichotomous Choice Contingent Valuation model was proposed by Hanemann et al. (1991) and further developed by Cameron and Quiggin (1994).The "doubleb" command developed by Lopez-Feldman (2012) in STATA version 12 was used to estimate the WTP and the impact of other variables on WTP.The "doubleb" command estimates maximum likelihood under the assumption of normality. 
The Dichotomous Choice Contingent Valuation model as developed by Cameron and Quiggin (1994) assumes a normal distribution of WTP and that the WTP of individual i can be modelled as the following linear function:

WTP_i(z_i, u_i) = z_i′β + u_i,  with u_i ~ N(0, σ²)   (1)

where z_i is a vector of explanatory variables, β is a vector of their corresponding coefficients and u_i is a random error term. It is expected that the individual i will answer "yes" when WTP_i is greater than or equal to the suggested amount t_i (i.e., when WTP_i ≥ t_i) and will answer "no" when WTP_i is less than the suggested amount t_i (i.e., when WTP_i < t_i). To find β, the following maximum likelihood function was used, where t_i¹ denotes the initial bid, t_i² the follow-up bid, and Φ the standard normal cumulative distribution function:

ln L(β, σ) = Σ_i [ d_i^ss ln(1 − Φ((t_i² − z_i′β)/σ))
              + d_i^sn ln(Φ((t_i² − z_i′β)/σ) − Φ((t_i¹ − z_i′β)/σ))
              + d_i^ns ln(Φ((t_i¹ − z_i′β)/σ) − Φ((t_i² − z_i′β)/σ))
              + d_i^nn ln(Φ((t_i² − z_i′β)/σ)) ]   (2)
where d_i^sn, d_i^ss, d_i^ns and d_i^nn are indicator variables that take the value of one or zero depending on the response from each individual. d_i^sn takes the value "1" if the respondent says "yes" to the first question and "no" to the second question. d_i^ss takes the value of "1" when the respondent answers "yes" to both questions. d_i^ns takes the value "1" when the respondent answers "no" to the first question and "yes" to the second question. d_i^nn takes the value "1" when the respondent answers "no" to both questions. From the setup of the questions, the responses will generate a value of 1 in only one part of Equation (2), and the rest will take the value of 0. The respondent will thus contribute to the logarithmic function in only one of its parts, as all the other parts will be equal to 0. After finding β and σ, the WTP for each individual was estimated using Equation (3):

E[WTP_i | z_i] = z_i′β   (3)

evaluated at the estimated coefficients.

Ethical Considerations
All respondents gave verbal consent to participate in the study. Due to low levels of literacy in the area, the questions were read out to the respondents in the language they were more comfortable with (English or the local language, Chichewa). Permission to conduct the study was granted by local leaders (chiefs). A local representative was appointed to accompany the enumerators wherever it was deemed necessary by the local leadership. There were several cases where household members requested consultation from the local leaders or heads of households before participating. These households were revisited later on an agreed day. The respondents' names were not recorded, and a unique ID number identified households.

Results
We obtained 1256 valid responses. A breakdown of the household characteristics is presented in Table 3. In order to determine the quantity of waste generated, respondents were shown a two liter basin and asked how many basins of that size they would fill up each day. They were also asked to estimate how long it took them to travel to their current waste disposal site.

Willingness to Pay
In both scenarios, the number of yes responses decreased as the initial bid amount was increased. The highest number of yes responses was made to the lowest initial bid. The five minute walk scenario had more yes responses to the initial bid compared to the 30 min walk scenario. Figure 1 presents a summary of the trend in responses. WTP was estimated to be K716 when the walking time to the disposal site was 30 min (scenario two). When the walking time to the disposal site was reduced to five minutes (scenario one), the average WTP increased to K1980. These were the averages calculated while all other variables were held constant.
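Before looking at the covariate models in Table 4, the sketch below shows how the estimation step of Equation (2) could be reproduced outside Stata: the double-bounded log-likelihood is coded with scipy and maximized numerically. It is an illustrative re-implementation in the spirit of the "doubleb" command, not the code used in the study, and the data in the example are invented (constant-only model).

```python
# Illustrative double-bounded WTP estimation (Equation 2); not the study's code.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_log_likelihood(params, t1, t2, Z, outcome):
    """Negative log-likelihood of the double-bounded model WTP_i = z_i'beta + u_i."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                  # keep sigma positive
    mu = Z @ beta
    hi = norm.cdf((np.maximum(t1, t2) - mu) / sigma)
    lo = norm.cdf((np.minimum(t1, t2) - mu) / sigma)
    p = np.where(outcome == "ss", 1.0 - hi,
        np.where(outcome == "nn", lo, hi - lo))  # "sn"/"ns": WTP between the bids
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

# Toy data: true mean WTP ~ K1500, sigma ~ K800, bids as in the questionnaire.
rng = np.random.default_rng(0)
n = 500
wtp = rng.normal(1500, 800, n)
t1 = rng.choice([1000, 2000, 3000, 4000, 5000], size=n).astype(float)
t2 = np.where(wtp >= t1, 2 * t1, t1 / 2)
outcome = np.array(["ss" if (w >= a and w >= b) else
                    "sn" if w >= a else
                    "ns" if w >= b else "nn"
                    for w, a, b in zip(wtp, t1, t2)])
Z = np.ones((n, 1))                            # constant only; add covariates as columns
res = minimize(neg_log_likelihood, x0=np.array([1000.0, np.log(500.0)]),
               args=(t1, t2, Z, outcome), method="Nelder-Mead")
print("estimated mean WTP:", res.x[0], " sigma:", np.exp(res.x[1]))
```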
Table 4 shows the impact of the covariates on the WTP estimations: position of the valuation question (first in questionnaire = 0), age (years), gender (male = 1), education (some formal education = 1), house owners (owners = 1), years in present house (years), years in South Lunzu (years), employment status (employed = 1), household income (K per month), walking with a bag of waste (walk = 1, onsite = 0), satisfaction with SWM (satisfied = 1), and ranking of SWM compared to other priorities (SWM highest ranked = 1, other position = 0). Odd numbered columns are for the five minute walk scenario and even numbered columns are for the 30 min walk scenario. The table also illustrates how personal and household traits along with environmental variables influence WTP. Columns 1 and 2 represent the findings from the base model where WTP was calculated without any covariates. Columns 3 and 4 include the respondent and household characteristics. Columns 5 and 6 include covariates related to the present practices in solid waste management. In Columns 7 and 8 all variables were taken into consideration. As shown in Columns 3-8, there was a difference in WTP depending on the placement of the questions: when placed later in the questionnaire, WTP was K2139 for scenario one and K1780 for scenario two. When the valuation question was placed first, WTP was K946 for scenario one and K1139 for scenario two. For a unit increase in age, there was a decrease in WTP of −58 in scenario one and −34 in scenario two, with both decreases being significant (Columns 3 and 4) when only demographic information is included in the model. The unit decrease in WTP changed to −22 and −7 for scenario one and two respectively when all variables are included (Columns 7 and 8). Women had a higher WTP than men in all instances of the model (all columns). A difference in WTP of K986 was obtained for scenario one (Column 7) and K927 for scenario two. The difference in both scenarios was found to be significant. For each unit increase in the number of years a family had stayed in their present house, there was an increase in WTP of K142 and K88 for scenario one and scenario two respectively (Columns 3 and 4). However, for every unit increase in the number of years a family had stayed in South Lunzu, there was a decrease in WTP of −45 and −23 for scenario one and scenario two respectively (Columns 3 and 4).
Placing the valuation question before all other questions resulted in a lower WTP value compared to placing it after obtaining the respondents' perceptions on waste management. This result is an indication that other questions influence the respondent's choices. A pilot implementation would be required to select the position that gives the most accurate willingness to pay value. Details of the differences in WTP across social groups within the sample are in Appendix B. Key differences to note were those within the gender groups, education levels, house ownership, and employment. Table 5 highlights some of the key differences in WTP between groups from the sample. Educated respondents had a higher willingness to pay than those without any form of education. Respondents without any formal education had a negative WTP, which suggests that they had an intention of receiving money, and which may indicate that they were unclear about the bidding system or that they were actually seeking payment. Those with tertiary education had a willingness to pay of K2210 to walk five minutes to the disposal point and K1233 to walk 30 min, and those without tertiary education had a WTP of K1770 for the five minute walk scenario and K905 for the 30 min walk scenario. In the five minute walk scenario, those who were employed had a WTP of K1949, which was higher than the WTP of K1927 offered by those who were unemployed, although the difference was only K22 and not significant. However, in the 30 min walk scenario the difference rose to K359, with the unemployed having a higher WTP of K1297 and the employed having a lower WTP of K937. It should be noted as well that the baseline walking time was nearly seven minutes (Table 3). Thus, the willingness to pay, as well as to walk an additional amount of time, to access proper solid waste disposal is a significant finding. In the distribution of WTP across forms of employment, employees of non-governmental organizations had the highest average WTP, followed by students. Government employees had the lowest WTP among the forms of employment considered. The trend in Figure 2 suggests a possible correlation between employers, household income and WTP values.

Discussion
A WTP value of K716 per month for the 30 min walk scenario shows that there is a potential of raising up to K8786 per household per year for solid waste collection in South Lunzu. The WTP for the five minute walk scenario presents an opportunity to raise up to K23,763 per household annually for solid waste collection. The population of South Lunzu was 38,966 in 2012 (UN-Habitat 2012), translating to approximately 7640 households, and probably more than 9000 by 2018 (official statistics are not available). Thus, an annual revenue of between K67,124,428 and K181,549,320 could be collected if all households in South Lunzu were to participate (based on a population of 38,966). This not only presents an opportunity for financing solid waste collection, but also for business creation and employment within the solid waste value chain.
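A back-of-envelope version of this revenue projection is sketched below. It simply multiplies the annual per-household figures quoted above by the approximate number of households, assuming full participation as in the quoted totals; the first total differs slightly from the K67,124,428 in the text because the per-household figure is rounded.

```python
# Back-of-envelope version of the revenue projection in the Discussion (Kwacha).
# The annual per-household figures and the ~7,640 household count come from the text.
def annual_revenue(per_household_per_year_k, households=7640, participation=1.0):
    """Projected yearly collection revenue under a given participation rate."""
    return per_household_per_year_k * households * participation

for label, annual_k in [("30 min walk (scenario two)", 8786),
                        ("5 min walk (scenario one)", 23763)]:
    print(f"{label}: about K{annual_revenue(annual_k):,.0f} per year")
```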
Furthermore, the residents of South Lunzu are willing to segregate their waste provided the incentives are in place.This willingness presents an opportunity to reduce the quantity of waste that arrives at the dump site, provided the necessary treatment technologies and systems are in place (e.g., composting, recycling, etc.).However, for the benefits of these model systems to be realized, a comprehensive study would have to be carried out to identify the composition of the solid waste. Out of the households, 55.89% were found to be willing participants in solid waste collection.While it is true that over 44% of respondents were not willing to contribute (WTP = 0), the average salary among respondents was about $150/month (Table 3) and the estimated WTP values ranged up to 2% of the monthly income: a non-significant portion of a very low income.Collectively the households willing to participate generate 42% of the waste generated by the sampled households; in the short term, a reduction in solid waste would be expected to improve the quality and aesthetics of the local environment, and in the long term, cause a trickle-down effect, ideally encouraging non-participants to join. Women had a higher WTP than men, enforcing the oft-held but rarely quantified notion that women have a higher sensitivity to their environment than men.In essence, this would highlight a need to involve more women in the management of solid waste as this suggests potentially higher commitment towards living in clean surroundings.However, this can only be verified through a pilot implementation. The difference in the WTP amounts between those who had stayed in the same house for longer and those who had changed houses could be because of the potential change in the household environment when the family moves to a new house.Those who have stayed in the same house for long may expect to stay there longer and thus are more concerned about their home.Those who have not stayed as long in South Lunzu are perhaps more transient and may be intending to move from their present house in South Lunzu, resulting in lower WTP.We found no correlation between the current disposal practices and the WTP values. The significant difference in WTP based on the position of the valuation question indicates that the design of a WTP survey should take into consideration the structure of the questionnaire itself.A pilot trial would be required in order to ascertain which position of the question in the questionnaire obtains more accurate willingness to pay.This symbolizes the tradeoff that the people are willing to make in order to improve solid waste management in the area. Conclusions Our results indicate that there is a willingness to pay for improving solid waste collection in South Lunzu.However, the amount of money the people were willing to pay depended on several factors.Increasing income led to higher WTP values, which is to be expected.Older people had a lower WTP compared to younger people.Respondents who had some education had a positive WTP and those without any formal education had a negative WTP.Such variations in WTP show that the social characteristics of respondents as well as demographic characteristics are critical in revenue generation from waste collection.Given the variation, the lowest acceptable WTP value would likely have to be piloted to ensure the broadest rate of acceptance. 
When designing the solid waste collection systems, walking distance is a crucial factor in its success.The higher WTP for walking five minutes to the disposal point signifies a preference for short distances. There is potential to improve solid waste management through improvements in solid waste collection at the community level through community managed solid waste collection schemes.Despite this, further campaigns would have to be developed and implemented to gain collective support from all community members, as well as to increase awareness of all individual's roles. The impact of environmental framing and the position of the valuation question should also be taken into consideration.Other questions may influence the respondents to give a higher or lower WTP and result in biased responses.Overall, varying the position of the WTP question is likely to identify possible influences of preceding questions on WTP.A pilot implementation of the presented hypothetical scenarios coupled with a choice experiment would identify the position of the valuation question that more accurately estimates the WTP.Practically, the pilot should be coupled with an environmental awareness campaign, since the messaging appears to have an impact on the actual WTP value. At least 56% of households gave a "yes" response for scenario with a five minute walking time (scenario one) and 38% said "yes" to walking 30 min (scenario two).Projecting this finding to the population of South Lunzu, 4263 households would agree to participate in scenario one and 2926 households would participate in scenario one.An annual total of K101,301,669 could be realized if scenario one was to be implemented and K25,707,602 if scenario two was to be implemented.This represents a potential source of finance that could be used to improve solid waste management, as well as other community amenities such as drainage. The findings also imply that policy should consider expanding the role of waste management from the city council to the local communities.In this case, the city could provide centrally located transfer stations and require that local communities mobilize themselves to raise resources for waste collection to the transfer station.Under this new strategy, the city would facilitate the formation of community-based waste management committees that would enforce waste separation and community hygiene on the council's behalf. A pilot implementation is needed in order to ascertain the actual WTP.This pilot implementation would also help identify the position of the valuation question in the questionnaire that accurately estimated the actual WTP.Among the factors to be included in the pilot would be a test for source separation of solid waste and the impact of distance on WTP. City councils that are unable to provide city-wide services should delegate their waste collection responsibilities to communities by facilitating the formation of community management committees that will collect solid waste and payments.This system would enable solid waste collection even in hard to reach areas where there is no road access. Future work making use of dichotomous choice contingent valuation should pay careful attention to the position of valuation question relative to other questions.Since the position of the question that best estimates WTP is not yet known, researchers should randomize the position of the WTP question in the questionnaires. 
Appendix A (excerpt). Questions, Code, Responses.
A Valuation. A4 Scenario 2: Imagine if there was a different community managed service to collect waste every week from a road junction near your house, where you would walk a maximum of 30 min when carrying waste from your house to that place. Participation in this would also require you to put plastics, food waste, glass, metals and yard waste in separate plastic bags for recycling. For this to work, you would have to make a monthly contribution to a community managed account. The money will be used to finance the operation of the solid waste collection system.
B12 In which income bracket does your monthly household income fall? Less than K5,000 / K5,001 to K10,000 / K10,001 to K15,000 / K15,001 to K20,000 / K20,001 to K50,000 / K50,001 to K80,000 / K80,001 to K120,000 / K120,001 to K150,000 / K150,001 to K200,000 / K200,001 to K250,000 / More than K250,001.
D Environmental concern. You have been given 28 beans to allocate to the list of public sectors, and another 28 beans to allocate to selected environmental issues. The number of beans you give to a sector will indicate the importance you attach to that particular sector. The more beans you place on the sector, the more important that sector is to you. D1 From the following list, which sector of public policy deserves the most attention? Indicate the level of attention as instructed above.
Figure 1. Response trends as bid amounts were increased. Figure 2. Willingness to pay (WTP) variation with income and place of employment. Table 2. Description of hypothetical scenarios. Table 4. Willingness to pay models. Table 5. Willingness to pay values.
2019-01-29T05:23:21.195Z
2018-10-09T00:00:00.000
{ "year": 2018, "sha1": "8bda41f16aea2dbbc9dde38c8cfffca0c3ff6052", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-7099/6/4/54/pdf?version=1539760677", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8bda41f16aea2dbbc9dde38c8cfffca0c3ff6052", "s2fieldsofstudy": [ "Environmental Science", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
5246704
pes2o/s2orc
v3-fos-license
BEHAVIOURAL COMPARISON OF DRIVERS WHEN DRIVING A MOTORCYCLE OR A CAR : A STRUCTURAL EQUATION MODELLING STUDY The goal of the study was to investigate if the drivers behave in the same way when they are driving a motorcycle or a car. For this purpose, the Motorcycle Rider Behaviour Questionnaire and Driver Behaviour Questionnaire were conducted among the same drivers population. Items of questionnaires were used to develop a structural equation model with two factors, one for the motorcyclist’s behaviour, and the other for the car driver’s behaviour. Exploratory and confirmatory factor analyses were also applied in this study. Results revealed a certain difference in driving behaviour. The principal reason lies probably in mental consciousness that the risk-taking driving of a motorbike can result in much more catastrophic consequences than when driving a car. The drivers also pointed out this kind of thinking and the developed model has statistically confirmed the behavioural differences. The implications of these findings are also argued in relation to the validation of the appropriateness of the existing traffic regulations. INTRODUCTION Road traffic accidents have an adverse impact on all levels of society.Not only individual victims of accidents and their families are frustrated, but also their employers and society as a whole suffer certain consequences.Traffic accidents also lead to significant costs, related to health care, lost productivity of individuals, premature death of the victim, short-term or long-term disability, etc.In order to avoid frustration and all costs related to the accidents, the safety aspects are also becoming one of the most crucial non-financial factors when a decision about purchasing a new vehicle is adopted [1]. There are several organizations dealing with road safety, like the "National Highway Traffic Safety Administration" (NHTSA), established by the US Department of Transportation, the "International Traffic Safety Data and Analysis Group" (IRTAD), instituted by the OECD, etc.These organizations also administrate the annual statistics about the tragic accidents in road transport.From IRTAD reports it is depicted that only a modest success in reducing the number of fatalities during the last years was achieved [2].In general, it is true that the entire number of traffic accidents has been slowly decreasing during the last decade.However, on the other side, this does not hold for the number of deaths, which even increased in 2012 in several countries, compared to the year 2011 [2]. 
It is well known that the motorcyclists are one of the most vulnerable road participants [3].The amount of fatalities related to the drivers of powered two wheelers (PTW) drops more slowly than with car occupants while the entire number of motorcycle traffic accidents is unfortunately still increasing.The fact is that the riders are often involved in road accidents and can get severe injuries [4].Diamantopoulou and his colleagues [5] revealed that as many as 50% of all motorcycle accidents can end in serious injury or death.In 2013 the European Union (EU) countries recorded 3,993 fatalities in case of motorcyclists [6].At the same time, the EU authorities also reported a significant decrease of car accidents with fatal outcomes, which means 12,535 fatalities in case of car accidents [6].On the one side, the car-related deaths were 50% reduced between the years 2000 and 2012.Nevertheless, on the other hand, the mileage-related risk of being killed in a road accident is even eighteen times higher for riders than it is for other road users [7]. It is also reported that the low-mileage drivers of any age have a significantly higher crash rate than the middle-mileage drivers of the same age.Also, the latter have a considerably higher crash rate than high-D.Topolšek, D. Dragan: Behavioural Comparison of Drivers when Driving a Motorcycle or a Car: A Structural Equation Modelling Study er-mileage drivers of the same age [8].This implies that the driving experiences are also closely related to the possibility of having an accident. Reason [9] distinguishes between the possible types of human errors, which can be in general classified as slips, lapses, mistakes, and violations.The cause of a traffic accident most frequently relates to the number of trips, driving behaviour and the choice of vehicle type [10].As it turns out, in 90% of all traffic accidents their cause is hidden in the human factors [11].There are several ways to measure the driver/ rider behaviour, and according to Wåhlberg, Dorn and Kline [12], the simplest way is to ask the drivers how they typically behave.Elliot and his colleagues [13] discovered that a significant amount of research in the scholarly literature is devoted to the field of risk factors associated with the vehicle and the environment.However, on the other hand, a quite big gap was detected in a research related to the motorcyclists' accident risk. 
To overcome this gap, the "Motorcycle Rider Behavior Questionnaire" (MRBQ) has been designed and introduced in the study [13].Its main purpose is to measure the motorcyclist behavioural factors, such as control and traffic errors, stunts, use of safety equipment, speed violations, and others.As introduced in this study, the MRBQ questionnaire was applied to a very massive sample of 8,666 participants and was designed on the basis of taxonomy [14].The questionnaire which comprises 43 items should measure the riders' behaviour as reliably as possible.It also attempts to investigate how the behaviour is related to the crash risk.As final result of Reason and the others' work, the 5-factor model structure was derived.This study was later updated with another study carried out by Özkan and his colleagues [15].They investigated the original 5-factor model structure with Turkish riders.Also, they studied the relationships between different types of motorcyclist behaviour on the one hand and the active and passive accidents and offenses, on the other hand.They confirmed that the factor model contains five factors (speed violations, traffic errors, safety equipment, stunts, and control errors), which were apparently extracted.The analysis has also revealed high item loadings and acceptable internal consistency. The MRBQ was developed on the basis of previously designed "Driver Behavior Questionnaire" (DBQ), which is a commonly used tool in traffic psychology research.The original version of this questionnaire was based on 50 items [9], but afterwards, several other versions were also conducted, as described in the study of Mattson [16].This author introduced the 28-item-based version of the DBQ questionnaire in his work.Another frequently used version of DBQ was presented by Lajunen et al. [17], who translated an extended 27-item questionnaire [18,19] into the Dutch and Finish case.Since the latter covers all necessary aspects of car driving behaviour, this version of the questionnaire was used in our study as well. The investigation of the literature shows that only several studies included a behavioural comparison between motorcyclists and car drivers.For instance, authors Banet and Bellet [20] concluded that globally car drivers consider the particular situation as more critical than the motorcyclists.Another study, carried out by Horsewill and Helman [21], showed a slightly different findings.In this study it was concluded that there were no significant differences in risk-taking tendencies between the riders and car drivers.However, in these studies, the drivers' and riders' populations were in principle independent of each other.In other words, this means that it was unnecessary that the drivers are also the riders and vice versa. In the spirit of research introduced by Elliot et al. 
[13], this study was focused on the driving behaviour of motorcyclists, which are all car drivers as well.Therefore, the primary aim of the research was the examination of possible differences between a person's risk-taking tendencies when they use roads as a rider or as a driver.On this basis, two questionnaires were conducted in a survey, performed among the same population of riders.The MRBQ was applied in the sense of motorcyclist's behaviour while the DBQ was employed in the spirit of car driver's behaviour.To the best of our knowledge, nearly no similar studies have been conducted, which would investigate the link between a person's behaviour as a rider and their behaviour as a driver.For this reason, it is our belief that our research, which applied both questionnaires, MBRQ and DBQ, to the same population, could be one of the major contributions of this paper. For the purpose of research, an anonymous survey among Slovenian motorcyclists has been conducted.The indicator variables, obtained from both questionnaires were input into the statistical modelling procedure in the next step.After the preliminary use of exploratory factor analysis (EFA), the structural equation model (SEM model) was designed [22,23,24,25]. The structural equation modelling is a very advanced statistical tool, which comprises factor analysis and multiple regression analysis into a comprehensive modelling framework.SEM can be also addressed as a generalization of causal path modelling, which provides an efficient modelling mechanism to reveal the complex causal relationships between the multiple variables.In our case, the SEM procedure was used by applying two consecutive stages.In the first stage, the measurement part of the SEM model was derived by means of the confirmatory factor analysis (CFA).Afterwards, in the second stage, the structural part of the SEM model was also extracted, which enabled us to finish the development of the SEM model.All the corresponding computations were carried out with the program package IBM SPSS V21, where its extension AMOS was also used. The derived SEM model was used to study the relationship between the risk-taking behaviour, when the person is a motorcyclist, and when this same person is a car driver.The principal task of the model was to confirm statistically the subjective opinion of the target population, who claimed that they actually behave in a safer way when driving a motorcycle.If, namely, the confirmation of different behaviour is positive, the findings of the present study might be very interesting for the traffic legislature.Maybe the latter should consider again the fairness of Slovenian traffic laws, which regulate that a person is punished by losing all driving licenses in case of severe violation.For instance, why should they be disciplined by the loss of the motorbike license if the violation happened while driving a car or even a bicycle? 
Conceptual framework and hypothesized model Figure 1 depicts the conceptual framework associated with the hypothesized model.It can be seen that the 43-item indicators of the MRBQ questionnaire [13] were denoted by M i , i = 1,...,43 while the 27-item indicators of the DBQ questionnaire [17] were symbolized by D i , i = 1,...,27.It is supposed that such adequate model can be found, which includes two latent factors only, each related to the corresponding questionnaires (MRBQ and DBQ).One factor (named MRBQ) is linked to the motorcyclists' behaviour-based item measures while the other (named DBQ) can be expressed via the measurements of the car drivers' behaviour-based indicators.The model comprises two factors since we are only interested in identifying the possible causal relationship between rider-related behaviour and driver-related behaviour of the same person.This relation can be addressed as a part of our main hypothesis H 1 , which implies that the drivers do not behave in the same way, when they are driving a motorcycle or a car.Hypothesis H 1 also involves the positively directed impact from factor MRBQ to factor DBQ, weighted by a certain level L 1 . Sample and participants in the survey To establish the possible relationship between the different behaviour of individuals, when they drive a motorcycle or a car, an anonymous survey among Slovenian motorcyclists has been conducted.Each motorcyclist filled out the MRBQ questionnaire, DBQ questionnaire, and items related to the riders' driving record (when driving a motorcycle).In addition, the questions about demographic variables were filled in too.The MRBQ and DBQ were translated into the Slovenian language to avoid any misunderstanding.The data were collected over the 5-week period in fall 2014, and this collection was carried out by the means of online surveys, together with a traditional questionnaire. Motorcyclists were also asked to answer the questions about their gender and age.The final sample comprised 88.8% males and 11.2% females.There were 32.4% of the participants aged between 50 and 59 years, 31.3% between 40 and 49 years, 22.0% between 30 and 39 years, 8.2% were aged over 59 years, and finally, 6.1% were aged between 20 and 29 years. Apparently, the middle-age generation represented the majority in our sample.When the survey was finished, 182 fully completed MRBQ and DBQ questionnaires were received, which were afterwards included in further research. 
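To make the hypothesized two-factor structure of Figure 1 concrete, the measurement and structural parts of the model can be written in standard SEM notation. The symbols below follow common conventions (item loadings lambda, residuals epsilon, delta, zeta) and are an illustrative summary rather than a restatement of the original figure:

```latex
% Measurement part: each observed item loads on its latent factor
M_i = \lambda^{M}_{i}\,\mathrm{MRBQ} + \varepsilon_i, \qquad i = 1,\dots,43
\qquad
D_j = \lambda^{D}_{j}\,\mathrm{DBQ} + \delta_j, \qquad j = 1,\dots,27

% Structural part: the hypothesized path H1, weighted by L1
\mathrm{DBQ} = L_1\,\mathrm{MRBQ} + \zeta
```

Hypothesis H1 is then the statement that L1 is positive and statistically significant.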
In the following two sections, the structure of both questionnaires and the descriptive statistical properties of the collected item data are presented.

Data collected in the MRBQ questionnaire
The MRBQ consists of 43 items related to the safe or dangerous behaviour of motorcyclists (see Table 1). Each item describes a specific behaviour that could be attributed to a motorcycle rider. The frequency of committing the event described by each item was expressed on a 5-point Likert scale from "Never" to "Nearly all the time". Most research in the field, including the original MRBQ work [13], applied a seven- or six-point scale for the indicators; in our case, due to local characteristics, it was decided to use a five-point scale only. As can be seen from Table 1, the range of the skewness index (SI) for the indicators was (-1.826, 4.276), while the range of the kurtosis index (KI) was (-1.489, 7.954). According to several authors [22,26,27,28,29,30,31], the normality of the indicator data was only slightly, not severely, violated. Therefore, these ranges do not represent a serious non-normality problem.

Data collected in the DBQ questionnaire
In our study, as mentioned before, we decided to use the 27-item questionnaire presented by Lajunen et al. [17]. These authors investigated a four-factor model structure in which the indicator variables were assigned to the following factor categories: "Aggressive violations", "Ordinary violations", "Errors", and "Lapses". The indicator items and their properties are given in Table 2. Participants were asked to estimate how often they commit any of the specified violations and errors when driving. Their answers were recorded on a five-point Likert scale ranging from "Never" to "Nearly all the time". As can be seen from Table 2, the range of the skewness index for the items was (-0.223, 4.050), while the range of the kurtosis index was (-0.945, 6.997). Thus, as with the MRBQ, the normality of the item data was only slightly, not severely, violated.

Methods for analysis and model development
Figure 2 shows the block diagram of the methods used in the analysis of our research [24]. In the first stage, the exploratory factor analysis was applied. The EFA is often used as a preliminary statistical technique for identification of the latent factors and estimation of their indicator loadings. This way, the relationships between the observed indicator variables and the corresponding factors can be investigated.

In the next stage, the confirmatory factor analysis was applied, which examines how well the presumed theoretical structure of the factor model fits the real data. This provides the confirmatory test of our measurement theory and yields the measurement part of the SEM model as the final result.

In the final stage, the structural part of the SEM model is derived via the SEM modelling procedure, so that the causal relations between the factors are also identified. The design of the overall SEM model is completed when the validation tests and goodness of fit (GOF) measures give adequate results.
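As a minimal illustration of the normality screening summarised in Tables 1 and 2 above, the per-item skewness and kurtosis indices can be computed directly from the raw response matrix. The file name and column layout below are hypothetical placeholders, not the data set used in the study:

```python
import pandas as pd
from scipy.stats import skew, kurtosis

# Hypothetical response matrix: one row per respondent,
# columns M1..M43 (MRBQ items) and D1..D27 (DBQ items), coded 1-5.
responses = pd.read_csv("survey_responses.csv")

summary = pd.DataFrame({
    "mean": responses.mean(),
    "SD": responses.std(ddof=1),
    # Skewness index (SI) and kurtosis index (KI) per item;
    # kurtosis() returns excess kurtosis (normal distribution -> 0).
    "SI": responses.apply(lambda col: skew(col, bias=False)),
    "KI": responses.apply(lambda col: kurtosis(col, bias=False)),
})

# Flag items outside one commonly cited rule of thumb for "severe" non-normality.
flagged = summary[(summary.SI.abs() > 3) | (summary.KI.abs() > 10)]
print(summary.round(3))
print("Potentially problematic items:", flagged.index.tolist())
```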
Used estimators During the estimation of parameters in CFA and SEM procedure, the maximum likelihood (ML) method was conducted, since the ordinal indicator variables were only slightly non-normal.The excuse to apply this commonly used estimator is based on the findings of several previously introduced studies in the scholarly literature.In these studies, it is reported that the ML estimator gives the satisfactorily accurate estimated parameters, if the ordinal indicators contain at least five levels and are nearly normal [23,32,33,34].This can be particularly justified since the χ 2 statistic and GOF indices are not significantly false in such case [23]. Exploratory factor analysis The main goal while doing the EFA analysis was to extract only two factors, one related to the MRBQ, and the other related to the DBQ.The correctness of conducting the factor analysis was inspected by means of two tests: Bartlett's test of sphericity, and the Kaiser-Meyer-Olkin KMO test [22,23,24].The val-ue of the Bartlett's test was very large (χ 2 = 1,878.168with df = 378 and p < 0.001) while the KMO value was 0.828 > 0.5.Based on the recommendation from Frohlich and Westbrook [35], Sahin et al. [36], and Li et al. [37], the factor analysis can be reliably used in further research.While processing the extraction of factors, the principal axis factoring (PAF) method, with additional Promax rotation (and Kaiser Normalization) was employed. The PAF method analyses only common factor variability and removes unexplained variability from the factor model.It is based on searching of the lowest number of factors that can describe the common variance or correlation of a set of variables.After the completion of this method, only those items were retained which were significantly loaded on MRBQ factor or DBQ factor (which means: loadings λ ij > 0.40, according to [24]). The results of the rotated factor pattern matrix (factor loadings λ ij , Cronbach's alpha coefficients, and % of the total variance explained) are presented in Table 3.The calculation of Cronbach's alphas' statistics is a commonly used procedure while doing the factor analysis.Alphas represent the estimated values of reliability or internal consistency of a statistical instrument.They measure how well a set of observed items represents the corresponding unmeasured construct (factor).They also evaluate whether the items have an adequate internal consistency, i.e. whether they are strongly correlated and truly measure the same construct [22,32]. In our case, the Cronbach's alphas are both bigger than 0.7, which implies that reliability and internal consistency are, according to the recommendation of Hair and his colleagues, adequate [24].Cumulative percent of the total variance explained is low since many ill-fitting items were dropped from further analysis.The reason is that their communalities and/or loadings on factors MRBQ and DBQ were not adequate.This was expected since only two factors were applied in the analysis due to our specific research goals.Naturally, in a different configuration with more than two factors, the achieved results would differ from the one presented. 
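The EFA steps described above (Bartlett's sphericity test, the KMO measure, principal axis factoring with Promax rotation, and Cronbach's alpha) have close equivalents in the Python factor_analyzer package. The snippet below is an illustrative sketch rather than the SPSS procedure used in the study, and the data frame is the same hypothetical item matrix as before:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

items = pd.read_csv("survey_responses.csv")  # hypothetical item matrix

# Sampling adequacy checks
chi2, p = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2={chi2:.1f}, p={p:.3g}; KMO={kmo_total:.3f}")

# Two-factor principal factor extraction with oblique (Promax) rotation
efa = FactorAnalyzer(n_factors=2, method="principal", rotation="promax")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns,
                        columns=["MRBQ", "DBQ"])
retained = loadings[loadings.abs().max(axis=1) > 0.40]  # keep |lambda| > 0.40

def cronbach_alpha(df):
    """Cronbach's alpha for a set of items (rows = respondents)."""
    k = df.shape[1]
    return k / (k - 1) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

for factor in ["MRBQ", "DBQ"]:
    cols = retained.index[retained[factor].abs() > 0.40]
    print(factor, "alpha =", round(cronbach_alpha(items[cols]), 3))
```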
Confirmatory factor analysis
While executing the CFA, the results of the earlier EFA were taken as a baseline (see Figure 2). After the estimation of the parameters of our 2-factor model was completed, the majority of the estimated factor loadings remained quite similar to those of the EFA case (see Table 3), and their range was consistent with the recommendations in the literature [24].

Subsequently, the convergent validity was also inspected through the computation of composite reliability (CR) and average variance extracted (AVE). According to the authors of [24], the thresholds of the CR and AVE values are 0.70 and 0.5, respectively. The CR and AVE values were calculated for both factors, MRBQ and DBQ, and they turned out to be greater than the prescribed thresholds: CR MRBQ = 0.721; CR DBQ = 0.894; AVE MRBQ = 0.512; AVE DBQ = 0.638.

Structural equation model
Considering the block diagram in Figure 2, the structural equation modelling procedure was the next and last stage of the SEM modelling process. Herein, the structural part of the SEM model was derived. Since this part is inseparably connected with the measurement part obtained from the CFA, the composite of both parts provided the overall structure of the SEM model. When the estimation procedure was completed, practically the same estimated factor loadings were obtained as previously calculated in the CFA.

The goodness of fit of the derived SEM model was, as with the CFA, inspected through the computation of several model fit indices recommended in the scholarly literature. Here again, as in the CFA case, these indices provided evidence of a good model fit. The retained items (introduced in Table 3) are also shown in Figure 3. Since the causal path from factor MRBQ to factor DBQ is statistically significant with a regression weight of 0.59, our main hypothesis H1 introduced in Figure 1 is confirmed. Thus, it can be concluded that in our study the factor MRBQ indeed has a positively directed impact on the factor DBQ. From this, it might also be supposed that there is some truth in the subjective opinion of the target population, who claimed that they actually behave in a different way when driving a motorcycle or a car.

DISCUSSION
The results presented in the previous section revealed that there indeed exists a certain difference in driving behaviour when the same population drives a car or a motorcycle. The major reason probably lies in their psychological awareness that risk-taking on a motorcycle can have far more tragic consequences than in a car. They are evidently familiar with the statistical fact that the chance of surviving a severe accident is much lower for riders than for car drivers. Additionally, they probably subconsciously feel safer surrounded by the car "armour" than when they have no physical protection as a motorcyclist. So, they drive more safely when using a motorbike, particularly in the sense of no alcohol drinking, avoidance of control and traffic errors, and rigorous use of the safety equipment.
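Returning to the confirmatory analysis reported above, the convergent-validity figures (CR and AVE) follow directly from the standardized CFA loadings, and the two-factor model with the MRBQ -> DBQ path can be specified, for example, in the Python semopy package. This is an illustrative equivalent of the AMOS workflow, not the software used in the study; the item lists are placeholders for whichever indicators were retained:

```python
import numpy as np
import pandas as pd
import semopy

data = pd.read_csv("survey_responses.csv")   # hypothetical item-level responses

# lavaan-style model description; the regression line encodes the
# structural path MRBQ -> DBQ (hypothesis H1).
model_desc = """
MRBQ =~ M7 + M12 + M25 + M31
DBQ  =~ D3 + D9 + D14 + D21 + D26
DBQ  ~  MRBQ
"""
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect(std_est=True))           # standardized loadings and paths

def composite_reliability(lam):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(lam, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

def average_variance_extracted(lam):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(lam, dtype=float)
    return (lam ** 2).mean()
```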
All this was also subjectively confirmed by the population drivers.As they said, the only exception is about speed violations, which sometimes occur due to the lack of objectivity about the actual speed.Also, they admit that while driving a bike, they feel more courageous in the case of seemingly non-dangerous situations (flat road, etc.).However, on the other side, while driving a car, they claim that they behave more nonchalant, superficial, and routinely. The findings of this study thus imply that the behaviour of the road users should be perhaps treated differently and more sensibly from a traffic regulations' point of view if the person is a rider or a driver.The Slovenian traffic laws, namely, determine that the person is disciplined by losing all driving licenses in case of serious violation.So, if they are punished as a driver, then they are penalized by the cost of the motorbike license as well.But, is this fair, since they presumably drive safer as motorcyclists?Naturally, this law specifics are characteristic only for Slovenia, and most likely the punishment is not so strict in other countries. In the future work, our attention will be focused on extending the research to some other countries since now the findings have only a local character.Maybe in other countries the road users, who are drivers as well as riders, behave completely differently.Perhaps they are not so cautious when driving a motorcycle and are not so aware of all tragic implications which can occur with too risky behaviour.Thus, when further, internationally based study is applied, our findings will be hopefully more generalized and will reveal possible cultural differences in conclusions about driving behaviour as a rider or a car driver. CONCLUSION The present study has dealt with the investigation of possible differences in drivers' behaviour when they drive a motorcycle or a car.For this purpose, the MRBQ and DBQ questionnaires were applied in a survey among the same population of interviewed motorcyclists. On the basis of collected survey data, the structural equation model with two factors, MRBQ factor and DBQ factor, has been developed.The SEM model has revealed positively directed, significantly weighted causal path from the MRBQ factor to the DBQ factor.So the model's performance statistically confirmed our main hypothesis, which implies that the motorcyclists behave differently when they are driving a motorbike or a car.This finding might be a serious warning for the legislature to rethink about the fairness of existing laws, which penalize the offender by confiscation of all driving licenses in case of serious violation.We think that such a rigorous measure is not proportionate to the seriousness of the particular offense, especially with non-fatal events or accidents with minor damage. It is believed that the main conclusions of this research might represent a significant contribution of this paper.In addition, since there are practically nocomparable papers detected which could report a similar type of research, the results of this study could help to fill the gap in the existing literature on the topic. 
Figure 1 - Conceptual framework and the main hypothesis H1 (an assumption of a different driving behaviour as a rider or a car driver)
Figure 3 - The standardized estimated SEM model. Figure 3, which corresponds to the conceptual framework in Figure 1, illustrates the standardized estimated SEM model with the estimates significant at the p ≤ 0.05 level, including the factors MRBQ and DBQ and their retained items (introduced in Table 3).
Table 1 - Mean, standard deviation (SD), skewness index (SI) and kurtosis index (KI) of the MRBQ indicators
Table 2 - Mean, standard deviation (SD), skewness index (SI) and kurtosis index (KI) of the DBQ indicators
High Quality ATAC-Seq Data Recovered from Cryopreserved Breast Cell Lines and Tissue DNA accessibility to transcription regulators varies between cells and modulates gene expression patterns. Several “open” chromatin profiling methods that provide valuable insight into the activity of these regulatory regions have been developed. However, their application to clinical samples has been limited despite the discovery that the Analysis of Transposase-Accessible Chromatin followed by sequencing (ATAC-seq) method can be performed using fewer cells than other techniques. Obtaining fresh rather than stored samples and a lack of adequate optimization and quality controls are major barriers to ATAC’s clinical implementation. Here, we describe an optimized ATAC protocol in which we varied nuclear preparation conditions and transposase concentrations and applied rigorous quality control measures before testing fresh, flash frozen, and cryopreserved breast cells and tissue. We obtained high quality data from small cell number. Furthermore, the genomic distribution of sequencing reads, their enrichment at transcription start sites, and transcription factor footprint analyses were similar between cryopreserved and fresh samples. This updated method is applicable to clinical samples, including cells from fine needle aspiration and tissues obtained via core needle biopsy or surgery. Chromatin accessibility analysis using patient samples will greatly expand the range of translational research and personalized medicine by identification of clinically-relevant epigenetic features. slowly frozen human embryonic stem cells 15 and hematopoietic B cells 16 , and a modified protocol was developed for flash-frozen human thyroid cancer and brain tissues (Omni-ATAC) 17 . However, previous studies did not provide adequate data to assist researchers who wish to optimize the ATAC-seq protocol for other samples, including a comparison between fresh and frozen stored cells or tissues, which is critical information needed to optimize this method for other clinical applications. In this study, we optimized the ATAC-seq method for fresh and cryopreserved breast cancer cells and mouse mammary tissues. We found that high quality genome-wide open chromatin landscape data that is comparable to that produced using living cells can be generated from a small number of cells, as well as from small tissue samples stored by cryopreservation. Adapting ATAC-seq to small, stored clinical samples will greatly expand the reach of translational research and allow researchers to fully characterize the link between chromatin landscape changes and disease. This approach may also be applicable for personalized medicine therapeutic strategies employed by clinicians treating breast cancer and other diseases. Results ATAC protocol optimization for human breast cancer cell lines. The ATAC protocol is reported to require optimization for each cell type and tissue 18,19 . The first step of ATAC procedure is nuclear preparation from samples. Thus, we firstly compared three previously published nuclear preparation step using different cell lysis buffers from 50,000 live MCF7 cells. We assessed data quality using the original ATAC protocol lysis buffer (0.1% NP40) 12 , the Omni-ATAC protocol lysis buffer (0.1% NP40, 0.1% Tween-20 and 0.01% Digitonin) 17 , and the Takaku-ATAC protocol (0.1% Triton X-100) 18 . 
Nuclei preparation efficiency was estimated by trypan blue exclusion and 20%, 45%, and 99% of cells were found to be permeabilized using the original, Omni, and Takaku protocols, respectively. However, the Omni protocol nuclei yield reached 90% upon the addition of 0.1%Tween-20 and 0.01% Digitonin during the following transposase reaction step. Similar sequencing data were obtained by the Takaku and Omni protocols, including total tag count (Fig. 1A) and peak distributions, also referred as genomic distribution hot spots ( Supplementary Fig. 1). In contrast, the original ATAC-seq method produced a relatively small number of hotspots (Fig. 1A,B, Supplementary Fig. 1B), with a significantly higher fraction of hot spots located in promoters ( Supplementary Fig. 1A). To evaluate the signal-to-noise ratio and to compare the efficiency of the transposase reaction, we used the Transcription Start Site enrichment score (TSS score), as recommended by ENCODE, and the Percent Reference peak Coverage (PRC) metric developed in our laboratory to ensure compatible levels of digestion (Fig. 1B). PRC is a rate of hotspots overlapping with common open chromatin sites of the species and can evaluate hotspots distant from TSS. Both TSS and PRC scores were similar between the Takaku and Omni ATAC-seq protocols, but significantly better than the data obtained using the original ATAC protocol. Although, the Takaku protocol resulted in a higher proportion of mitochondrial reads, it also had the best TSS read counts (Fig. 1C). Thus, we conclude that inefficient nuclear preparation obtained using the original ATAC protocol resulted in lower number of peaks, lower PRC, and a higher promoter-localized fraction. In addition, the Takaku protocol is simpler and has fewer manipulation steps, which will avoid DNA loss steps and thus beneficial for clinical laboratories handling small samples. The nuclear preparation step is followed by tagmentation using transposase. The optimal transposase concentration is essential for high quality of ATAC-seq data. Thus, we further optimized the Takaku protocol using fresh MCF7 cells by increasing the transposase concentration from 2X to 4X compared to previously published ATAC protocol 12 , keeping the total volume of the reaction constant at 50 µl. The higher transposase concentration improved genome coverage, produced higher total hot spot numbers, and a lower signal-to-noise ratio without affecting the fraction of hot spots in promoters ( Supplementary Fig. 2). Hereafter, we refer to the optimized protocol as OPTI-ATAC. We confirmed that OPTI-ATAC was also suitable and significantly better than the original ATAC-seq protocol when applied to other breast cancer cell lines (T47D and ZR75-1, Supplementary Fig. 3). We then tested whether the OPTI-ATAC protocol can be applied to smaller cell number samples (10,000, 25,000 cells). The total reaction volume for 10,000 cells was decreased to 20 µL and the transposase concentration was increased to 10X. Under these conditions, OPTI-ATAC performed using 10,000 cells produced a similar percentage of uniquely mapped reads and was almost identical to OPTI-ATAC using 50,000 cells across number of hotspots, PRC, TSS score and rate of mitochondrial reads (Supplementary Fig. 4 and Table 1). ATAC protocol optimization for frozen cells. The lack of suitable methods for chromatin landscape analysis in stored samples has impaired its application to clinical samples, which are routinely fixed in formalin or flash frozen. 
Although formalin-fixed samples digested with DNase I are reported to provide some useful information 20 , the signal-to-noise ratio is low and the quality of the data is not suitable for genome-wide analysis. Unfortunately, frozen samples processed using the OPTI-ATAC protocol showed distorted nuclei and did not resulted in high quality data (<50% hot spot count compared to fresh cells) ( Fig. 2A,B). Thus, we tested other ATAC-seq protocols using MCF7 cells flash frozen in liquid nitrogen. Previous published studies suggested that the nuclear preparation step is not always necessary for open chromatin region analysis using flash frozen samples 17,21 . Thus, we performed the Omni-ATAC protocol with and without a nuclear preparation step (+/− NP). More peaks, a higher PRC value, and a higher signal-to-noise ratio was observed when the NP step was included (Fig. 2C). These data indicate that a nuclear preparation step is required for ATAC performed on flash frozen MCF7 cells. When comparing Omni-ATAC + NP with OPTI-ATAC-NP, we observed that they generated similar hot spot numbers and distribution, although Omni-ATAC + NP had a higher signal-to-noise ratio and enrichment of tag counts at TSSs ± 5 kb ( Fig. 2C-E). Omni-ATAC + NP may be the best protocol for flash frozen MCF7 cells, but the data quality is much lower than fresh cells (Fig. 2E, Table 1). We then tested whether freezing cells via a different method could improve the ATAC-seq data quality. We stored cells by slowly freezing them in DMEM supplemented with 10% Dimethyl sulfoxide (DMSO) and 50% serum, which should protect against ice crystal formation and subsequent cell damage, hereafter referred to as cryopreservation. This approach produced excellent results and high-quality data comparable to fresh cells (Fig. 3). OPTI-ATAC using 10,000 cryopreserved cells produced similar total hot spot counts to the results obtained from 50,000 fresh cells, and to DNase-seq data from the ENCODE database, which was performed on millions of fresh MCF7 cells (ibid). Furthermore, similar genomic distribution and signal-to-noise ratio were observed (Supplementary Fig. 5A and Table 1). Next, we performed digital footprinting analysis and detected footprints for E2F and Nuclear Respiratory Factor 1 (NRF1), key transcription factors associated with breast cancer progression and response to therapies [22][23][24] . High quality footprint data was generated from 10,000 cryopreserved and 50,000 fresh cells (Fig. 3E). An example UCSC genome browser screenshot from the ESR1 gene locus is presented in Fig. 3F to confirm the open chromatin pattern using OPTI-ATAC protocol on 10,000 and 50,000 fresh as well as 10,000 cryopreserved cells. OPTI-ATAC and Omni-ATAC protocols using 50,000 flash frozen cells produced tracks with smaller peaks and higher background. We further optimized the OPTI-ATAC protocol by increasing the concentration of transposase from 2X to 4X and 10X. Higher transposase concentrations significantly improved total hot spot numbers and signal-to-noise ratios. Furthermore, we obtained similar hot spot genomic distributions compared to ENCODE DNase-seq data (Supplementary Fig. 5A-C). When comparing results obtained using OPTI-ATAC and Omni-ATAC on 10,000 cryopreserved MCF7 cells, we determined that they generated a similar hot spot distribution ( Supplementary Fig. 5D). However, OPTI-ATAC had a higher TSS score ( Supplementary Fig. 5E). 
As OPTI-ATAC is a simpler procedure that minimizes DNA sample loss and is adaptable to small cell numbers, we concluded that the OPTI-ATAC protocol is ideal for generating high quality of data from as few as 10,000 cryopreserved human breast cancer cells. We also confirmed that OPTI-ATAC was suitable for other cryopreserved breast cancer cell lines, T47D and ZR75-1 ( Supplementary Fig. 6). Thus, the OPTI-ATAC protocol allows for high-quality genome-wide open chromatin site analysis using cryopreserved cells. Adaptation of the ATAC protocol for analysis of mouse mammary gland. Finally, we applied our experience optimizing ATAC for breast cancer cell lines to fresh, flash frozen, or cryopreserved tissue samples. Specifically, we used mouse mammary gland, with the ultimate aim to adapt this protocol to human normal and diseased breast tissues. Flash freezing is one of the current methods for sample storage in the clinic, and Omni-ATAC was previously reported to be suitable for flash frozen mouse and human tissues 17 . Therefore, we tested the Omni-ATAC protocol using flash frozen mouse mammary tissue and introduced several different detergent combinations and nuclear preparation steps ( Supplementary Fig. 7). Unfortunately, none of these protocols, including OPTI-ATAC, which we developed for cryopreserved human breast cancer cells, gave sufficiently high-quality data using 50,000 nuclei isolated from mouse mammary gland (Supplementary Fig. 8 and Supplementary Table II). As cryopreservation improved the ATAC data quality using breast cancer cells, we tested whether it could also improve the quality of the data generated using stored mouse mammary gland. Unfortunately, the OPTI-ATAC protocol using cryopreserved tissue did not significantly improve the results when compared to flash frozen tissue ( Supplementary Fig. 8A). We then compared the results of 50,000 nuclei prepared from fresh, flash frozen and cryopreserved tissue using the previously published Omni-ATAC protocol. This protocol produced a well-defined nucleosome ladder, indicative of DNA fragments originally protected by an integer number of nucleosomes, that was comparable between fresh and cryopreserved tissues, as contrasted to flash frozen tissue ( Supplementary Fig. 8B). Thus, we chose to perform the Omni-ATAC protocol on cryopreserved mouse mammary gland. Figure 4A,B show the results of Omni-ATAC protocol comparing 50,000 nuclei from fresh, flash frozen and cryopreserved tissues. Similar results were obtained from cryopreserved and fresh, including total hot spot count, distance from TSS, signal-to-noise ratio and genomic distribution (Fig. 4, Supplementary Table II). UCSC genome browser tracks near the mouse housekeeping gene locus, Tbp, confirm that cryopreserved tissue generated high quality data (Fig. 4C). We further optimized the Omni-ATAC protocol to produce higher quality data. We found that removing the detergents (Digitonin and Tween 20, -D/T20) during the transposase reaction did not improve overall quality of the data (Supplementary Fig. 9A-D). However, increasing the transposase concentration doubled the hot spot count, increased the signal-to-noise ratio, and improved the overall data quality (PRC > 90%, Supplementary We examined whether the data from cryopreserved tissue can be used to identify TF footprints in accessible chromatin regions. 
CCCTC-Binding Factor (CTCF) and E2F footprint depths were comparable in cryopreserved tissue between the 1X and 2X transposase concentrations, and were much improved compared to flash frozen tissue (Supplementary Fig. 10B). Overall, we conclude that cryopreservation is a viable storage method for both cells and tissues for future identification and analysis of open chromatin sites.

Discussion
In this study, we demonstrated that ATAC-seq can generate high quality genome-wide chromatin landscape data from breast cancer cells and mammary tissue stored by cryopreservation that is comparable to the results obtained from fresh cells and tissue. Cryopreservation is an effective strategy for structural preservation of most mammalian cell types and is widely applied to cell banking, including umbilical cord-derived blood cells and embryonic cells used for assisted reproduction. DMSO is frequently used as a cryoprotectant to prevent intra- and extracellular ice crystal formation that damages cellular structures, including chromatin. Rapid thawing followed by cryoprotectant dilution prevents mechanical damage. Using this strategy, we cryopreserved different breast cancer cell types and found that ATAC-seq performed using only 10,000 cells yielded high quality data comparable to an equal number or even 50,000 fresh cells. The small cell numbers used in our studies are comparable to the analysis of one or two fine needle aspiration biopsies 25. This is applicable to future translational research because it can be used to evaluate chromatin accessibility in early stage tumors and intraoperative samples without damage to tumor margins.

Cryopreservation of tissues is more difficult than other storage procedures because tissues are a mixture of various cell types and optimal cryopreservation strategies can vary between cell types. We found that ATAC-seq can be performed on small samples of fresh tissue fragments and on fragments frozen in DMSO-containing media, such as the commercially available BamBanker 26. We confirmed that human liver tissue similarly cryopreserved in BamBanker generated high quality data (data not shown). However, OPTI-ATAC, which was best for human breast cancer cells, was not as successful as the Omni-ATAC protocol for mouse mammary tissue. This could be because the Omni-ATAC protocol uses multiple mild detergents, which tissue requires compared to cultured cells, although a longer incubation time is needed for tissue and nuclei preparation.

Figure 3 legend (continued): ...count between data of 50,000 fresh MCF7 and 4X transposase and the data of 10,000 cryopreserved MCF7 and 10X transposase. (C) Venn diagram confirms a large overlap between 50,000 fresh MCF7 with 4X transposase (red) and 10,000 cryopreserved MCF7 with 10X transposase (blue), both comparable to the DHS-seq data from the ENCODE database for MCF7 cells (yellow). (D) Histogram of enrichment at TSS ± 5 kb confirms similarity between results obtained by OPTI-ATAC from fresh (red) and cryopreserved (blue) samples. Omni-ATAC from flash frozen cells with nuclear preparation (green) is presented for comparison. (E) E2F and NRF1 footprint. Log ratio of observed versus expected tag count was adjusted by the baseline of each data set and plotted as distance from the binding motif. Similar results were detected using 50,000 fresh (red) and 10,000 cryopreserved cells (blue). Omni-ATAC (dark green) and OPTI-ATAC (light green) data from flash frozen cells are presented for comparison. The two vertical black lines indicate the boundaries of the motifs. (F) Sequence track surrounding the ESR1 gene locus from the UCSC browser for OPTI-ATAC from 10,000 and 50,000 fresh cells and 10,000 cryopreserved cells, and Omni-ATAC on 50,000 flash frozen cells. The two lower lanes show that flash frozen cells have lower signal and higher noise above background.

These data confirmed that cryopreservation can generate high quality data. Unfortunately, flash frozen cells or tissue samples generated low-quality data regardless of the ATAC-seq protocol, compared to fresh and cryopreserved samples. Furthermore, inappropriate ATAC and storage protocols detected a lower open chromatin site fraction in non-promoter regions. These results indicate that storage and ATAC protocol optimization is essential for enhancer region detection, which is cell specific but can characterize disease status.

To select the best ATAC-seq protocol, we used several rigorous measurements of quality control. The TSS score, recommended by the ENCODE consortium to estimate the signal-to-noise ratio, is not sufficient to estimate the coverage of all potentially open chromatin sites in the genome in each assay. Thus, our laboratory developed a novel method for quality control, PRC, a powerful analysis of coverage depth for all potentially open hotspots in the genome, including those near TSS. Combining these two measurements provided excellent quality control for experiments performed on different days and for comparing storage conditions, leading to selection of the optimal ATAC protocol for cells and tissues.

In conclusion, our study provides critical information that will help researchers to optimize clinically-applicable ATAC-seq protocols and use stored tissues for genome-wide chromatin landscape analysis, transcription factor footprinting, and, ultimately, disease-specific enhancer characterization. Selecting the correct freezing method is critical when using stored material for chromatin landscape analysis and should be considered in the development of future clinical trials. Our results will greatly expand the range of future translational research. Adapting ATAC-seq for stored clinical samples will lead to identification of epigenetic features and development of novel clinical targets not only for breast cancer, but also for many other diseases.

Methods
Cell lines. MCF7, T47D and ZR75-1 cells were obtained from ATCC (Manassas, VA), and cultured in 5% CO2 in a 37 °C incubator. MCF7 and T47D were maintained in Dulbecco's Modified Eagle Medium (DMEM) with 4.5 g/l of D-glucose, and supplemented with 10% FBS, 2 mM L-glutamine, 1 mM sodium pyruvate, and 1% penicillin/streptomycin. ZR75-1 cells were maintained in RPMI 1640 containing the same supplements. Cells were prepared using three different methods:

Fresh cells. Cells were washed in phosphate-buffered saline (PBS), trypsinized, trypsin neutralized with culture medium, cells pelleted by centrifugation, and washed in PBS. Cells were counted, 50,000 cells were centrifuged at 3,000 rpm for 5 min at 4 °C, and pellets were resuspended in cold cell lysis buffer followed by a nuclear preparation step 11,12, as below.

Cryopreservation of cells. Cells were trypsinized, trypsin neutralized with culture media, spun down, and the cell pellets were resuspended in slow freezing media (10% DMSO, 50% FBS, with 40% DMEM or RPMI 1640), transferred to an isopropyl alcohol chamber (Thermo Fisher Scientific, Waltham, MA), frozen slowly (−1 °C/minute) at −80 °C and stored for more than one month.
To thaw, the vials were warmed for approximately 2 min in a 37 °C water bath, mixed with PBS (1:1) and centrifuged at 3,000 rpm for 5 min at 4 °C. The supernatant was removed, cell pellets were resuspended in cold PBS and counted. 10,000-50,000 cells were transferred to individual tubes, centrifuged at 3,000 rpm for 5 min at 4 °C, resuspended in cold cell lysis buffer. Flash-frozen. The cells were washed with PBS, trypsinized, Trypsin neutralized in medium, counted, 50,000 cells were divided into individual vials and pelleted by centrifugation. The supernatant was removed, cells were flash-frozen in Eppendorf vials submerged in liquid nitrogen and stored in −80 °C for more than one month. The flash-frozen cell pellets were removed from −80 °C immediately and processed using the different ATAC protocols outlined below. Mouse mammary gland tissue. C57BL/6 J female mice (four to five months old) from the National Institutes of Health animal facility were sacrificed by CO 2 inhalation, mammary glands were dissected and processed as described below. All experiments were approved by the ACUC (Animal Care and Use Committee) of the National Cancer Institute, National Institutes of Health and all methods were performed according to relevant guidelines and regulations from ACUC. Fresh samples. Tissues were washed in ice-cold PBS containing protease inhibitors (Sigma, St. Louis. MO) and immediately processed for ATAC protocol described below. Cryopreserved samples. Tissues were washed in ice-cold PBS containing protease inhibitors, cut into 4-5 mm diameter pieces, and several fragments were slowly frozen in 1-1.5 ml freezing media as described above. To thaw, vials were warmed for 2 min in a 37 °C water bath, diluted in PBS containing protease inhibitors, and the supernatant removed. The samples were minced by a razor blade on ice, followed by ATAC protocol as below. Flash-frozen. Mammary glands were washed in PBS containing protease inhibitors, excess liquid removed on adsorbent tissue, cut into 4-5 mm size pieces, and three to five fragments were flash-frozen in liquid nitrogen and transferred to pre-chilled vials. The vials were stored in −80 °C for more than one month. The samples were removed from −80 °C, immediately minced by a razor blade on ice, and subjected to the ATAC protocol as below. Nuclear preparation of tissue. The Omni-ATAC method for mouse mammary tissue was modified based on Corces et al. 17 . Approximately 100 mg of mouse mammary tissue was minced into ~2 mm fragments by a razor blade on ice, placed into a pre-chilled 1 ml Dounce homogenizer containing 1 ml of cold homogenization buffer (320 mM sucrose, 0.1 mM EDTA, 5 mM CaCl 2 , 3 mM Mg(Ac) 2 , 10 mM Tris-HCl pH 7.8, proteinase inhibitors, and 167 μM β-mercaptoethanol). 0.1% NP40 was added during the Nuclear Preparation step (+NP), or no detergents if NP was omitted (−NP). We also used 0.1% TritonX-100 as described in the OPTI-ATAC protocol. Tissues were homogenized by 10 strokes using a loose pestle, followed by 20-30 strokes with the tight pestle. Tissue homogenate (400 μl) were transferred to pre-chilled new 2 ml Lo-Bind Eppendorf tube and mixed with equal volume of 50% iodixanol in homogenization buffer to obtain a final of 25% iodixanol. 600 μl of 29% iodixanol in homogenization buffer was layered underneath the buffer containing the samples, with 600 μl of 35% iodixanol in same buffer was introduced below the 29% iodixanol. Nuclei were centrifuged at 4,000 rpm for 20 minutes at 4 °C in a swinging-bucket centrifuge. 
The nuclei in the layer formed between 29% and 35% iodixanol were collected into a new tube and counted by staining with trypan blue. 50,000 nuclei were transferred into a pre-chilled new tube containing 1 ml ATAC-resuspension buffer with 0.1% Tween-20 for Omni-ATAC or without detergent for OPTI-ATAC, and centrifuged at 2,500 rpm for 10 minutes at 4 °C. Supernatant was removed and the nuclei pellets was resuspended in transposase reaction mix described below. Transposase reaction (chromatin tagmentation). Following The transposase reaction was carried out for 30 minutes at 37 °C, in a shaker at 1,000 rpm. The samples were purified using MinElute PCR purification kit (Qiagen, Frederick, MD) and eluted into 10 μl of Elution Buffer. Samples were PCR-amplified using 1X NEBNext High-Fidelity PCR Master Mix (New England Biolabs, MA), 1.25 μM of custom Nextera PCR primers as described 11 , using the following PCR protocol: 5 min at 72 °C, 30 sec at 98 °C, followed by thermocycling (10 sec. 98 °C, 30 sec. 63 °C, 1 min 72 °C). After five amplification cycles, 5 μl aliquot was removed and added to 10 μl of the PCR mixture with SYBR Green I (Invitrogen, Carlsbad, CA) and amplified for 20 cycles to determine the number of additional cycles based on the cycle number that corresponds to a third of the maximum fluorescent intensity. Libraries were amplified for a total of 7-10 cycles. Subsequent sample purification and size selection (150-1000 bp) were performed using SPRI select beads (Beckman Coulter, Indianapolis, IN). Fragmented DNA was eluted in 25 µl 10 mM Tris-HCl, pH 8.0. The quality of the tagmented libraries was visualized by Agilent D1000 ScreenTape on 2200 TapeStation system (Agilent Technologies, Savage, DE). Sequencing and data processing. The samples were subjected to paired-end sequencing using 2 × 75-bp reads using the Illumina NextSeq High V2 at the National Cancer Institute Sequencing Facility, (Frederick, MD). The reads were trimmed in silico to remove adapter sequences, low-quality reads, and 50 bp length using Trimmomatic 0.30 software. The reads were aligned to human (hg19) or mouse (mm9) reference genome using Bowtie2 alignment tool. Mitochondrial reads were filtered for the subsequent analyses. DNase-seq data of MCF7 cells from GEO accession # GSE32970 was downloaded from the UCSC ENCODE database. Peak calling and replicate concordancy. All peak calling was performed using MACS2 v.2.1.1 27 with callpeak-format BAMPE parameters for paired ended reads and callpeak-nomodel-shift-75-extsize 150 for single-ended reads. For peak-calling of ATAC-seq data, all forward strand reads were offset by +4 bp and all reverse strand reads were offset by -5 bp to represent the center of the transposon binding event 11 . For the replicates, we obtained the merged peak sets from pooled data as well as the sets of peaks from each individual replicate. We retained those peaks (referred in the text as hotspots) from pooled data that have at least 50% overlap in each replicate. Scatter plots, Venn diagrams, and Heatmaps. Scatter plots were generated to show the change in maximum tag densities in the sites between two replicates or conditions. Pearson correlations were calculated using R. For heatmaps, the total number of sequence reads under the 10 K base pair regions around the center of peaks has been extracted and normalized for the total number of reads in the sample (reads under the peak/10 million total reads) using annotatePeak.pl of the Homer Software Suite V. 4.10 28 . 
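The read-offset convention mentioned above (forward-strand reads shifted by +4 bp and reverse-strand reads by -5 bp, so that each coordinate marks the centre of the transposon binding event) can be applied to an alignment-derived BED file with a few lines of Python. This is a minimal sketch assuming a 6-column BED input; file names are placeholders:

```python
def shift_tn5_cut_sites(bed_in, bed_out):
    """Shift ATAC-seq read coordinates to the Tn5 insertion centre:
    +4 bp on the forward strand, -5 bp on the reverse strand."""
    with open(bed_in) as fin, open(bed_out, "w") as fout:
        for line in fin:
            chrom, start, end, name, score, strand = line.rstrip("\n").split("\t")[:6]
            start, end = int(start), int(end)
            if strand == "+":
                start, end = start + 4, end + 4
            else:  # reverse strand
                start, end = max(start - 5, 0), max(end - 5, 1)
            fout.write(f"{chrom}\t{start}\t{end}\t{name}\t{score}\t{strand}\n")

shift_tn5_cut_sites("reads.bed", "reads.tn5_shifted.bed")
```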
The heatmaps were generated using an in-house R script. For histograms, the data is presented as average read depth at each position in the surrounding 5,000 bp centered on RefSeq transcription start sites (TSS) determined using Homer (annotatePeaks.pl tss). The total tag number was normalized to 10 million reads. Area proportional Venn diagrams were generated to demonstrate the numbers of hot spots shared by two or three conditions using the Venneuler R statistical software package. The number of overlapped or unique peaks was determined using the software BEDtools suite v2.27.0 29 . Transcription Start Site (TSS) Enrichment score. TSS enrichment score is the ENCODE-recommended parameter to evaluate signal-to-noise ratio of open chromatin for ATAC-seq. The score was calculated by counting transposition events in 1 bp bins in the regions ±2,000 bp around all TSSs in hg19 RefSeq for human cells or mm9 RefSeq for mouse tissue samples. Percent Reference peak Coverage (PRC). Percent Reference peak Coverage (PRC) is a rate of hotspots overlapping with common open chromatin sites of the respective species. It was developed to ensure compatible levels of digestion by tagmentation between experiments. To calibrate PRC, commonly represented hotspot sites were identified as reference peaks in human (hg19) and mouse (mm9) genomes. For human reference peaks, ENCODE narrowPeak definition files were downloaded from http://hgdownload.cse.ucsc.edu/goldenPath/hg19/ encodeDCC/wgEncodeAwgDnaseUniform/. We collected 1,858 DNase I hypersensitivity sites consistently accessible (>97%) over 125 human cells available from ENCODE database which includes normal differentiated primary cells (n = 71), immortalized primary cells (n = 16), tumor-derived cell lines (n = 30), and multipotent and pluripotent progenitor cells (n = 8). For mouse reference peaks, ENCODE narrowPeak definition files (DNase I Hypersensitivity by Digital DNase I from ENCODE/University of Washington) from 133 samples were downloaded from the UCSC golden path web site: http://hgdownload.cse.ucsc.edu/goldenPath/mm9/encodeDCC/ wgEncodeUwDnase/. A total of 8,587 DNase I peaks were identified as common to all 133 samples. Identification of transcription factor recognition motif sites. Footprint analysis was performed for E2F and NRF1 for MCF7 cells and mouse mammary gland tissue. We downloaded the position weight matrices of each transcription factor from the JASPAR data base 30 . Candidate sites for each motif were identified using FIMO 31 (ver. 4.10.1) with p < 10 -4 to scan the human (GRCh37/hg19) and mouse (NCBI37/mm9) reference genomes. Footprint plot analysis. The footprint deviation was calculated by the log2 ratio between the aggregate observed counts and the aggregated expected counts from genome-wide sets of sites matching the corresponding motif. The depth was calculated as the mean of the log2 ratio over the candidate FIMO motif regions of the transcription factors which are also in the open chromatin regions. The observed count profiles were generated by taking the aggregation of the raw cut counts over the cognate motif element which is bound by the transcription factors within ATAC-seq peaks. The profiles were generated as previously described 32 by taking the average of DNA hexamer frequencies centered at each nucleotide position from the total raw cut counts in the samples 33 . 
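The PRC metric defined above reduces to an interval-overlap calculation: the percentage of species-specific reference peaks covered by at least one hotspot called in the sample. A minimal sketch, assuming BED-formatted peak files whose names are placeholders:

```python
from collections import defaultdict

def read_bed(path):
    """Load a BED file into a dict of sorted (start, end) intervals per chromosome."""
    peaks = defaultdict(list)
    with open(path) as fh:
        for line in fh:
            chrom, start, end = line.split("\t")[:3]
            peaks[chrom].append((int(start), int(end)))
    for chrom in peaks:
        peaks[chrom].sort()
    return peaks

def percent_reference_coverage(sample_bed, reference_bed):
    """PRC: percentage of reference peaks overlapped by >= 1 sample hotspot."""
    sample, reference = read_bed(sample_bed), read_bed(reference_bed)
    total = covered = 0
    for chrom, ref_peaks in reference.items():
        hits = sample.get(chrom, [])
        for r_start, r_end in ref_peaks:
            total += 1
            if any(s_start < r_end and s_end > r_start for s_start, s_end in hits):
                covered += 1
    return 100.0 * covered / total if total else 0.0

print(percent_reference_coverage("sample_hotspots.bed", "reference_peaks_hg19.bed"))
```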
To eliminate the sequencing bias due to non-uniquely mapped bases, we calculated hexamer frequencies using the obtained 3′ mappability information of k-mers using the mappability program available as part of the PeakSeq package 34 . To adjust for differences in the depth of sequencing between samples, read counts used in the calculation of both observed and expected counts were normalized to 100 million reads.
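As a simplified illustration of the footprint-deviation calculation described above, the aggregate observed cut-count profile over all motif instances can be compared with an expected (bias) profile as a per-position log2 ratio after depth normalisation. The arrays below stand in for profiles produced by upstream counting steps and are purely illustrative:

```python
import numpy as np

def footprint_deviation(observed, expected, obs_reads, exp_reads, target=1e8):
    """Per-position log2(observed/expected) cut-count profile, after
    normalising both profiles to `target` (e.g. 100 million) total reads."""
    obs = np.asarray(observed, dtype=float) * (target / obs_reads)
    exp = np.asarray(expected, dtype=float) * (target / exp_reads)
    ratio = np.log2((obs + 1.0) / (exp + 1.0))   # pseudocount for stability
    return ratio - np.median(ratio)              # baseline-adjust the profile

# Hypothetical aggregate profiles over a +/-100 bp window around all
# FIMO-matched motif sites falling inside ATAC hotspots.
profile = footprint_deviation(observed=np.random.poisson(40, 201),
                              expected=np.random.poisson(50, 201),
                              obs_reads=6.2e7, exp_reads=8.9e7)
depth = profile[90:111].mean()   # mean deviation over the motif core
print(f"footprint depth ~ {depth:.2f}")
```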
First-principles study of excitonic effects in Raman intensities The ab initio prediction of Raman intensities for bulk solids usually relies on the hypothesis that the frequency of the incident laser light is much smaller than the band gap. However, when the photon frequency is a sizeable fraction of the energy gap, or higher, resonance effects appear. In the case of silicon, when excitonic effects are neglected, the response of the solid to light increases by nearly three orders of magnitude in the range of frequencies between the static limit and the gap. When excitonic effects are taken into account, an additional tenfold increase in the intensity is observed. We include these effects using a finite-difference scheme applied on the dielectric function obtained by solving the Bethe-Salpeter equation. Our results for the Raman susceptibility of silicon show stronger agreement with experimental data compared with previous theoretical studies. For the sampling of the Brillouin zone, a double-grid technique is proposed, resulting in a significant reduction in computational effort. I. INTRODUCTION Raman spectroscopy is widely used to characterize materials by means of their vibrational fingerprint. The dependence of the Raman intensity on the frequency of the incident light is well-known. It is, for example, used to amplify the Raman response, resulting in the appearance of a resonance phenomenon when the frequency of the exciting light is close to electronic transitions. 1 Unlike for molecules 2,3 and for graphene, 4,5 the first-principles prediction of the frequency dependence of the Raman intensity of crystalline systems has received little attention. 1 The Raman intensity is related to the derivative of the macroscopic dielectric function with respect to collective atomic displacements. Different first-principle formalisms have been proposed for the computation of such a dielectric function. These formalisms often trade computational speed for predictive power, or vice versa. In the present study, a method that provides an accurate description of the dielectric properties of material was chosen in order to establish the importance of different physical effects, and in particular, excitonic effects. Within the static limit (vanishing light frequency), the dielectric response can be computed with Density-Functional Theory (DFT) 6-8 followed by Density-Functional Perturbation Theory (DFPT). [9][10][11] Although DFT is plagued by the well-known band gap problem, 8 its prediction of the static dielectric tensor is reasonably accurate (to within 5-10%) except when the gap is very small. 12 Subsequent computation of the derivative of the dielectric tensor with respect to an atomic displacement can be performed by using either finite differences 13 or the 2n + 1 theorem of perturbation theory. 14, 15 Such methodology has been applied in numerous studies. 16,17 As an example, more than two hundred Raman spectra are provided in the WURM database. 18,19 When the excitation frequency is comparable to the gap, DFT becomes unreliable for the prediction of the dielectric response. Not only does the proximity of the resonance increase the need to rely on an accurate band gap, but excitonic effects also drastically modify the optical properties of most semiconductors. 20 The band-gap correction is usually treated within the GW approximation of Many-Body Perturbation Theory (MBPT), while the Bethe-Salpeter Equation (BSE) is the method of choice to introduce excitonic effects. 
21,22 To our knowledge, the BSE has not yet been used to compute Raman intensities of solids. The purpose of the present work is to compute the Raman intensities, using a finite difference approach that combines multiple BSE results performed for different atomic displacements. Excitonic effects can also be addressed within the framework of time-dependent density functional theory. [22][23][24][25] This approach, which is computationally cheaper, also allows one to include excitonic effects, with an accuracy that depends on the choice of the exchange-correlation kernel. Recent studies shows interesting agreement with experiment for the macroscopic dielectric function (see e.g. Ref. 26). This route is not pursued in the present study. Instead we rely on the best theoretical approach available today to compute the frequency-dependent Raman intensities, and examine its predictive power in comparison with experimental data. We chose to study silicon, for which the experimental frequency-dependent enhancement factor is particularly strong. The available data cover the frequency range between 1.8 eV and 3.8 eV, 27 and the experimental value of the direct gap is at 3.4 eV. Due to the high symmetry of silicon, there is only one Raman-active phonon mode, whose eigenvector is determined by symmetry. In Sec. II of this article, the theoretical basis needed for the computation of the resonant Raman intensities is described, taking into consideration the main equations of the MBPT in the GW and BSE frameworks. Sec. III describes the numerical procedure. In Sec. IV, the problem associated with the slow convergence of results with respect to the sampling of the Brillouin zone is analyzed. Sec. V presents the theoretical results, including excitonic effects. Finally, in Sec. VI, theoretical and experimental results for the silicon Raman intensity are compared. II. THE COMPUTATION OF RESONANT RAMAN INTENSITIES FOR SOLIDS The scattering efficiency (time-average of the power radiated into unit solid angle) of the phonon of frequency ω m for a photon of frequency ω i is defined as: 15 with e o and e i the outgoing and ingoing polarization of the light, n m the phonon occupation factor: The complete field-theoretic expression for the Raman susceptibility α m (ω) is presented in Ref 20. It includes six terms, in which the frequencies ω m and ω i are combined in different denominators, giving resonant as well as anti-resonant contributions. In the following calculations, we will use the quasi-static approximation which neglects the dynamical effects due to the phonons. Mathematically, this approximation is well-justified 28 when: with ω gap the frequency corresponding to the direct band gap and η the lifetime broadening of the gap. In this framework, the Raman susceptibility α m (ω) for the phonon m is defined as: with Ω 0 the unit cell volume, χ ij the macroscopic dielectric susceptibility and u m τ β the eigendisplacement of phonon mode m of atom τ in direction β. In the present work, we neglect higher-order derivatives with respect to atomic displacements. The eigendisplacements are normalized as: with M τ the mass of atom τ . 29 For more details about the derivation of Eq. We define the Raman polarizability a by: with µ the reduced mass (in the case of silicon, µ = M Si /2). When the incoming frequency ω i is close to the energy of the direct gap, there is a resonant process and the amplitude of χ(ω i ) and α m (ω i ) can change by several orders of magnitude. 
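For reference, in the conventions of Ref. 15, on which this section is based, the scattering efficiency, phonon occupation factor, Raman susceptibility, and eigendisplacement normalization take the following standard forms. They are quoted here from the general formalism and may differ in minor notational details from the original equations:

```latex
\frac{dS}{d\Omega} \;=\; \frac{\omega_s^{4}}{c^{4}}\,
   \bigl|\,\mathbf{e}_o \cdot \boldsymbol{\alpha}^{m} \cdot \mathbf{e}_i\,\bigr|^{2}\,
   \frac{\hbar\,(n_m+1)}{2\,\omega_m},
\qquad \omega_s = \omega_i - \omega_m,
\qquad n_m = \bigl[e^{\hbar\omega_m / k_B T} - 1\bigr]^{-1},

\alpha^{m}_{ij} \;=\; \sqrt{\Omega_0}\,
   \sum_{\tau\beta} \frac{\partial \chi_{ij}}{\partial r_{\tau\beta}}\, u^{m}_{\tau\beta},
\qquad
\sum_{\tau\beta} M_\tau\, u^{m}_{\tau\beta}\, u^{n}_{\tau\beta} \;=\; \delta_{mn}.
```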
The computation of the macroscopic dielectric susceptibility χ(ω i ) follows the standard procedure used in ab initio MBPT. Two steps are needed: the computation of the quasiparticle energies, followed by the computation of the dielectric response of the material. The quasiparticle amplitudes ψ QP i and the quasiparticle energies ǫ QP i are computed by solving the following equation: with V ext (r) the electrostatic potential of the ions and V H (r) the Hartree potential originating from the electronic density n(r). In Eq. (7), Σ(r, r ′ ; ω) is the selfenergy, that, in the so-called GW approximation, is given by: where G is the Green's function and W the screened Coulomb interaction. 22 In the second part of the calculation, we include excitonic effects by working within the BSE framework. 22 In this framework, we introduce H, a two-particle hamiltonian that describes the interaction between electrons and holes. In the transition space, formed by products of two Kohn-Sham orbitals, the BSE hamiltonian has the following block structure: where v, c and k denotes the valence band index, the conduction band index and the wavevector. The resonant sub-block R is Hermitian, and the coupling term C is symmetric. Due to the coupling subblocks that connect resonant and anti-resonant transitions, the Bethe-Salpeter Hamiltonian is not Hermitian. This complicates the solution of the problem. In crystalline systems, however, the matrix elements of C are usually much smaller than the matrix elements of R. For this reason, the matrix elements of C are usually neglected when solving the Bethe-Salpeter problem in extended systems -the so called Tamm-Dancoff approximation (TDA). 30 This approximation is used in all the rest of this work. The matrix elements of the resonant block are given by: withv the modified Coulomb potential, whose Fourier transform does not contain the q = 0 component: v(r) the standard Coulomb potential: W (r, r ′ ) the screened Coulomb potential: and ǫ −1 (r, r ′ ) the inverse dielectric function. For the derivation of Eqs. (11) to (16), we refer to Ref. 22. The dielectric susceptibility χ(ω) and macroscopic dielectric function ε(ω) are then obtained from: where η is a broadening factor, F is taking into account the occupation numbers: and are the so-called oscillator matrix elements where n 1 and n 2 are a short-hand notation for vck. The Random-Phase approximation (RPA) is a simplification of the BSE approach, in which the exchange 31 and Coulomb terms, Eqs. (12) and (13), are neglected. In the RPA, the BSE Hamiltonian H is diagonal, and the spectrum is obtained directly as a simple sum over transitions between valence and conduction bands, weighted by the proper oscillator matrix elements. In such an independent-particle approximation, no excitonic effect is present. The importance of the excitonic effect on the optical spectrum is well-known, with prominent peaks being created below the band gaps in most wide-gap insulators or semiconductors, and redistribution of the spectral weight. III. NUMERICAL PROCEDURE Calculations are performed using ABINIT. 32,33 The pseudopotential used to simulate the silicon atom is of the Troullier-Martins type used in the Teter parametrization. The DFT-LDA calculations are performed with a 4 times shifted 4x4x4 Monkhorst-Pack grid to sample the Brillouin Zone (BZ), 34 and a plane-wave basis set kinetic energy cut-off of 16 Ha. 
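For reference, the standard forms of the quantities described in this section are collected below, in the usual MBPT notation (see Ref. 22). They are textbook GW/BSE expressions given for orientation only; they are not claimed to match the original equations or their numbering term by term.

\left[ -\tfrac{1}{2}\nabla^2 + V_{\mathrm{ext}}(\mathbf{r}) + V_H(\mathbf{r}) \right] \psi^{QP}_i(\mathbf{r}) + \int d\mathbf{r}'\, \Sigma(\mathbf{r},\mathbf{r}';\epsilon^{QP}_i)\, \psi^{QP}_i(\mathbf{r}') = \epsilon^{QP}_i\, \psi^{QP}_i(\mathbf{r})

\Sigma = i\, G\, W \quad \text{(GW approximation)}

H_{\mathrm{BSE}} = \begin{pmatrix} R & C \\ -C^{*} & -R^{*} \end{pmatrix} \quad \text{in the } |vc\mathbf{k}\rangle \text{ transition basis}

R_{vc\mathbf{k},\,v'c'\mathbf{k}'} = \left( \epsilon^{QP}_{c\mathbf{k}} - \epsilon^{QP}_{v\mathbf{k}} \right) \delta_{vv'}\delta_{cc'}\delta_{\mathbf{k}\mathbf{k}'} + 2\,\bar v_{(vc\mathbf{k}),(v'c'\mathbf{k}')} - W_{(vc\mathbf{k}),(v'c'\mathbf{k}')} \quad \text{(spin singlet)}

\varepsilon_M(\omega) = 1 - \lim_{\mathbf{q}\to 0} v(\mathbf{q}) \sum_{\lambda} \frac{\left| \sum_{vc\mathbf{k}} \rho^{*}_{vc\mathbf{k}}(\mathbf{q})\, A^{\lambda}_{vc\mathbf{k}} \right|^{2}}{\omega - E_{\lambda} + i\eta} \quad \text{(within the TDA)}

Here ρ denotes the oscillator matrix elements and (A^λ, E_λ) the eigenvectors and eigenvalues of the resonant block R.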
The theoretical lattice cell for silicon is 10.20 Bohr, which gives an error of 0.6 % with respect to the experimental results (10.26 Bohr). 35 Using this theoretical lattice constant, the DFT-LDA indirect gap is 0.45 eV, while the direct gap is 2.52 eV. Quasi-particle corrections are computed within the socalled one-shot GW or G 0 W 0 approximation. 21 We use a cut-off energy of 8 Ha for the screening and 16 Ha for the self-energy matrix elements. An extrapolar energy of 3 Ha is used to reduce the number of bands needed to converge to 100 bands. 36 The computed GW corrections give a direct gap of 3.20 eV. These results are comparable to other GW results 37,38 and in good agreement with the experimental band gap of 3.4 eV. 39 During the computation of the BSE optical spectrum, the opening of the gap is simulated by a rigid scissor 40 with a value of 0.65 eV to reproduce the theoretical GW gap unless stated otherwise. Convergence of the Bethe-Salpeter computations with respect to the BZ sampling is particularly difficult, and is discussed in the next section. The cut-off energies are 16 Ha for the wavefunctions and 3 Ha for the screening. The included bands range from the second to the ninth band. A broadening factor of 0.1 eV is used for the dielectric function. The quasi-static approximation that is used extensively in this work is justified in the case of silicon since the lifetime broadening (≈ 0.1 eV) is larger than the phonon frequency ≈ 0.065 eV. 28 In this quasi-static approximation, two-band as well as three-band contributions to the resonant Raman are included, the latter coming from the matrix element changes due to changes in the wavefunctions produced by phonon-induced admixture of the two bands under consideration with a third band. 41 In order to compute derivatives with the displacements, we add h × √ 2/2 to the x-position of the atom and −h × √ 2/2 to the x-position of the other atom, for different values of h. The derivative is obtained by computing χ yz (ω) for h = 0.01 and h = 0 in the convergence studies and is obtained by computing χ yz (ω) for h = 0.01 and h = −0.01 for the final result. We have analyzed the behaviour of the G 0 W 0 scissor shift, as a function of the frozen phonon amplitude. Because the Raman amplitude corresponds to a firstorder derivative with respect to atomic displacement, see Eq. (4), we only have to consider the linear response. For non-degenerate eigenstates, due to the high symmetry of the crystals, such derivative of the scissor shift with respect to atomic positions vanish. For degenerate eigenenergies, linear variations of eigenenergies are present, but the mean variation vanishes over the set of degenerate states. Hence, the modification of the G 0 W 0 corrections with respect to atomic displacement does not have any effect on the Raman intensity of silicon, within the present formalism. Of course, this is a very specific situation. For the analysis of most other materials, the variation of the eigenenergies at linear order will have to be taken into account. As implemented in ABINIT, the BSE gives the macroscopic dielectric function for a given q-direction: whereε is the dielectric tensor. The macroscopic dielectric function (Eq. (17)) is computed using the iterative Haydock technique. 42 The al-gorithm is terminated when a relative error of 1%, for the real and the imaginary part of ε, is achieved for each frequency in the frequency range under investigation. IV. 
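The displacement pattern and central difference described above can be summarized in a few lines. This is an illustrative sketch only: compute_chi_yz is a placeholder for a full BSE (or RPA) run at the distorted geometry, its keyword arguments are hypothetical, and h = 0.01 mirrors the amplitude quoted in the text (a forward difference with h and 0 was used in the convergence studies).

import numpy as np

def dchi_yz_dh(compute_chi_yz, omegas, h=0.01):
    """Central finite difference of chi_yz(omega) with respect to the
    frozen-phonon amplitude h.  The two silicon atoms are moved by
    +h*sqrt(2)/2 and -h*sqrt(2)/2 along x, as described in the text."""
    d = h * np.sqrt(2.0) / 2.0
    chi_plus = compute_chi_yz(atom1_dx=+d, atom2_dx=-d, omegas=omegas)
    chi_minus = compute_chi_yz(atom1_dx=-d, atom2_dx=+d, omegas=omegas)
    return (chi_plus - chi_minus) / (2.0 * h)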
SAMPLING OF THE BRILLOUIN ZONE As mentioned previously, achieving converged results with respect to the sampling of the Brillouin zone is a very difficult issue. In particular, we will show in this section that grids that are appropriate for obtaining a converged macroscopic dielectric function in the whole frequency range are not sufficiently dense for derivatives, such as the Raman intensities, at least in the resonance region. In order to accelerate the convergence of BSE spectra, shifted homogeneous meshes are traditionally used. A symmetry-breaking shift allows one to sample more non-equivalent points and therefore leads to a more representative sampling of the band dispersion: with respect to non-shifted meshes, its presence lowers the computational load for an equivalent convergence criterion. To ease the discussion, we introduce specific notation. The meshes are characterized by the number of divisions along each axis of the primitive cell in reciprocal space, namely n_1, n_2 and n_3. For a cubic crystalline structure these three numbers are taken as equal (n_1 = n_2 = n_3 = n_k). The total number of points inside this mesh is therefore N_k = n_1 n_2 n_3 = n_k^3. All the points of the mesh are shifted by a certain vector characterized by three real numbers s_i between -0.5 and 0.5, s = (s_1, s_2, s_3). This shift is such that the point (s_1/n_1, s_2/n_2, s_3/n_3) belongs to the shifted mesh. We use the notation (n_1 × n_2 × n_3 | s) to refer to such a mesh. Fig. 1 presents the macroscopic dielectric function (from BSE) for different grids with increasing numbers of wavevectors, while Fig. 2 presents the corresponding Raman intensity, both with excitonic contributions (BSE) and without excitonic contributions (RPA). These grids, of size N_k, are shifted by the vector s = (0.11, 0.21, 0.31) in reciprocal space along a non-symmetric direction. As seen in Fig. 1, for the computation of the macroscopic dielectric function, the oscillations present in the (10 × 10 × 10|s) case are progressively damped when the density of the mesh is increased, and a (16 × 16 × 16|s) grid gives converged results. However, from Fig. 2, we observe that the convergence is much more difficult for the square of the Raman susceptibility, in the region beyond 3.2 eV (which corresponds to the optical gap). Important features still change going from (16 × 16 × 16|s) to (18 × 18 × 18|s). Interestingly, such a difficult convergence is present both with and without excitonic contributions, as exemplified by the upper and lower parts of Fig. 2. With the method of shifted grids, convergence is not achievable, given our computational resources, beyond 3.2 eV. Indeed, the scaling of the method with respect to the sampling of the BZ is O(N_k^2), with N_k the total number of k-points in the full BZ. [FIG. 1 caption: (Color online). Convergence of ε(ω) (BSE) with respect to a traditional homogeneous sampling of the BZ. A shift s in a non-symmetric direction is used (see text). The number indicated is n_k and the grid is therefore (n_k × n_k × n_k|s). The imaginary part is shown in blue, the real part in orange-red. The full line corresponds to the finest grid that we have used, with n_k = 18. Oscillations appear for energies larger than 3.2 eV, but are damped with increasing n_k.] With N_k = 18^3, the convergence is not yet reached. By contrast, convergence is much better below the gap value.
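A shifted homogeneous grid in the (n_1 × n_2 × n_3 | s) notation introduced above can be generated as follows. This is a generic sketch in reduced coordinates, without symmetry reduction; it is not an ABINIT input.

import numpy as np

def shifted_grid(n, s):
    """Return the N_k = n**3 points of an (n x n x n | s) mesh in reduced
    coordinates, i.e. k = (m + s) / n for all integer triplets m in [0, n)."""
    m = np.stack(np.meshgrid(range(n), range(n), range(n), indexing="ij"), axis=-1)
    return (m.reshape(-1, 3) + np.asarray(s)) / n

kpts = shifted_grid(16, (0.11, 0.21, 0.31))   # the non-symmetric shift used for Figs. 1 and 2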
In the next paragraphs, we first perform an analysis of the convergence for such frequencies, then turn to the higher-frequency part of the spectrum, for which we have developed a double-grid technique. A. The convergence below the band gap In order to have a quantitative understanding of the convergence in the low-energy part, we use a Taylor expansion and give coefficients similar to the so-called Cauchy coefficients for the macroscopic dielectric function. 43 Since the function is even with respect to the frequency, we can expand the absolute value of the Raman tensor and the real part of the dielectric function in even powers of the frequency:

|\alpha(\omega)| \approx c_0 + c_2\,\omega^2 + c_4\,\omega^4 + c_6\,\omega^6, \qquad (22)

\mathrm{Re}\,\varepsilon(\omega) \approx d_0 + d_2\,\omega^2 + d_4\,\omega^4 + d_6\,\omega^6. \qquad (23)

The coefficients can be obtained by a least-squares fitting of the finite-difference results up to 1.5 eV (see Tab. I). The results of the fit obtained with this technique are presented in Fig. 3. The range of validity of this fit goes beyond 2 eV. A fitting above 2 eV leads to an oscillatory behaviour in the very low energy range: the four-term expansion in Eqs. (22) and (23) is not accurate enough to describe the higher-energy part. The Cauchy coefficients are already well-converged for the 14^3 grid (within a few percent for the first and second ones). However, such a fit does not correctly describe the resonance close to the gap energy. [Figure caption fragment: "The number indicated is n_k and the grid is therefore (n_k × n_k × n_k|s). The convergence is difficult to achieve for energies larger than 3.2 eV."] B. The convergence above the band gap As mentioned earlier, we analyze the convergence of the Raman intensities both in the BSE case (Fig. 2 (a)) and in the RPA approximation (Fig. 2 (b)). Both RPA and BSE present similar difficulties in converging the final results. Hence, we can conclude that the convergence issue is not primarily due to the building up of the excitons that arises from the off-diagonal couplings, but is already present at the independent-particle level. [Figure caption fragment: "... Eqs. (22) and (23). The grids used are (n_k × n_k × n_k|s)."] Why the convergence rate in the case of the Raman susceptibility is lower than for the macroscopic dielectric function can be understood as follows. The imaginary part of the dielectric function, for a given wavevector grid, is made of numerous broadened Dirac delta contributions, each of which corresponds to one transition from a valence band to a conduction band. In order for such a spectrum to look smooth, the broadening should be comparable to the typical spacing between the delta functions. By contrast, the frequency-dependent Raman intensity is obtained by differentiating the dielectric function. Hence, the Raman intensity corresponds to the superposition of a large number of derivatives of broadened delta functions, whose oscillatory character is much stronger than that of the broadened Dirac functions. This is reflected at the level of the Raman intensity. Having identified the problem, we design another strategy for sampling the BZ that largely reduces the computational burden and memory requirements. In the same spirit as Ref. 44, but with a rather different implementation, we introduce a double-grid technique. We perform a set of BSE calculations, indexed by the label i, each with the same number of points in the BZ forming a "coarse" grid, differing by their shift s_i:

\{\mathbf{k}\}_i = \left( n_k \times n_k \times n_k \,\middle|\, \mathbf{s}_i \right) = \left\{ \frac{\mathbf{m} + \mathbf{s}_i}{n_k},\ m_j = 0, \dots, n_k - 1 \right\}. \qquad (24)

The shifts s_i are chosen in order to obtain a homogeneous sampling of the subspace between the coarse points:

\mathbf{s}_i = \frac{\mathbf{i} + \mathbf{h}}{n_{\mathrm{div}}}, \qquad i_j = 0, \dots, n_{\mathrm{div}} - 1, \qquad (25)

with n_div the number of subdivisions in each direction and h = (1/2, 1/2, 1/2). A 2D schematic representation is illustrated in Fig. 4.
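The low-frequency fit of Sec. IV A amounts to a linear least-squares problem in even powers of ω. The sketch below fits four coefficients to |α(ω)| sampled below 1.5 eV; the arrays omega and alpha are placeholders for the finite-difference results.

import numpy as np

def cauchy_fit(omega, alpha_abs, n_terms=4):
    """Least-squares fit |alpha(omega)| ~ c0 + c2 w^2 + c4 w^4 + c6 w^6,
    analogous to a Cauchy expansion of the dielectric function."""
    powers = 2 * np.arange(n_terms)                    # 0, 2, 4, 6
    design = omega[:, None] ** powers[None, :]         # Vandermonde-like matrix
    coeffs, *_ = np.linalg.lstsq(design, alpha_abs, rcond=None)
    return coeffs

# restrict the fit to the region below 1.5 eV, as done in the text:
# mask = omega <= 1.5
# c = cauchy_fit(omega[mask], np.abs(alpha[mask]))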
With this technique, the macroscopic dielectric function is obtained by averaging the different results computed on the "coarse" grids:

\varepsilon(\omega) = \frac{1}{n_{\mathrm{div}}^3} \sum_i \varepsilon(\omega\,|\,\{\mathbf{k}\}_i), \qquad (26)

where ε(ω|{k}_i) is the macroscopic dielectric function obtained for the "coarse" computation with the grid {k}_i, Eq. (24). Fig. 5 presents the results obtained with different coarse mesh samplings (different n_k), averaging over 64 calculations (n_div = 4 is kept constant). Of course, when n_k becomes very large, the Raman spectrum must tend to the same spectrum as without this double-grid technique. But the computational effort is largely reduced. Indeed, the residual fluctuation when going from n_k = 14 to n_k = 16 can be seen to be rather small already. In the RPA case, n_k = 16 with n_div = 4 corresponds exactly to a (64 × 64 × 64|(1/2, 1/2, 1/2)) uniform grid, which would be intractable in the BSE case. It is worth stressing that in the current method we can take advantage of symmetries to reduce the number of "coarse" grid calculations, since some meshes are equivalent. For example, the number of required computations with n_div = 4 falls to 20 for the case where an atom is displaced and to 10 for the equilibrium position. V. ANALYSIS OF THE THEORETICAL RESULTS In this section, we analyze in more detail the importance of excitonic effects on the Raman spectrum. A comparison between BSE and RPA results is reported in Fig. 6. Note how the excitonic effects amplify the Raman intensity by more than an order of magnitude in the band gap region. Much smaller amplifications are observed for low frequencies. Since the integral of the imaginary part of the dielectric susceptibility is related to the plasmon frequency ω_p (the so-called f-sum rule): 45,46

\int_0^{\infty} \omega\, \mathrm{Im}\,\chi(\omega)\, d\omega = \frac{\omega_p^2}{8}, \qquad (27)

and since ω_p does not depend on atomic positions, the integral of the imaginary part of the Raman susceptibility vanishes:

\int_0^{\infty} \omega\, \mathrm{Im}\,\alpha^m(\omega)\, d\omega = 0. \qquad (28)

Accordingly, negative and positive regions are present in Fig. 7. On the basis of Eq. (28), we can see that the difference between the BSE and RPA results for the Raman intensity is due to the lowering of the energy, and the amplification, of the main peak of the imaginary part of the Raman susceptibility. In the approximation in which the atomic displacement induces a global rigid shift of all the conduction bands with respect to all valence bands, with energy ∆ε = ε_ck − ε_vk (or, alternatively, if one transition dominates), the derivative with respect to an atomic displacement is related to the derivative with respect to frequency by

\frac{\partial \chi(\omega)}{\partial u_m} \simeq - \frac{\partial (\Delta\varepsilon)}{\partial u_m}\, \frac{\partial \chi(\omega)}{\partial \omega}. \qquad (29)

This relation shows that the amplitude of the Raman effect will follow the variation of the band structure with the atomic displacement. 1 As represented in Fig. 7, this approximation is only valid at the onset of absorption. In this range of energies, the curves are qualitatively on top of each other. This shows that the transition corresponding to the energy gap dominates in this range of energies. For higher energies, however, this approximation is not valid, since each band can contribute differently from other bands to the Raman susceptibility. Fig. 8 shows the ab initio results for the Raman susceptibility of silicon, and compares these results with the experimental data obtained by Compaan and Trodahl, 27 who measured the Raman intensity as a function of the frequency in silicon. In this figure, the theoretical results are obtained with a scissor value of 0.85 eV that reproduces the experimental gap at 0 K (3.4 eV), instead of 0.65 eV, which reproduces the theoretical GW gap (3.2 eV). VI.
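The double-grid average can be written compactly: generate the n_div^3 sub-shifts, run one coarse-grid BSE calculation per shift, and average the resulting dielectric functions. In the sketch below, run_bse is a placeholder for such a calculation, the shift formula follows the reconstruction of Eq. (25) above, and the symmetry reduction of equivalent shifts mentioned in the text is not implemented.

import numpy as np
from itertools import product

def double_grid_epsilon(run_bse, n_k, n_div, omegas):
    """Average eps(omega) over n_div**3 shifted coarse grids.

    run_bse(n_k, shift, omegas) -> eps(omega) on one (n_k x n_k x n_k | shift)
    mesh; it stands in for a full BSE (or RPA) calculation."""
    shifts = [(np.array(i) + 0.5) / n_div for i in product(range(n_div), repeat=3)]
    eps = np.zeros_like(omegas, dtype=complex)
    for s in shifts:
        eps += run_bse(n_k, s, omegas)
    return eps / len(shifts)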
COMPARISON WITH EXPERIMENTAL DATA In terms of absolute value, the polarizability a BSE = 19.75Å 2 obtained at 1.1 eV compares reasonably well with the experimental data of 23 ± 4Å 2 . The RPA value, a RP A = 15.87Å 2 , does not match the experimental value. This confirms the need to correctly describe excitonic effects even for energies well below the gap. We can distinguish three different regions with different behaviour: the low-energy (pre-resonance) region, from 2 eV to 3.2 eV, the band-gap region from 3.2 eV to 3.5 eV and the higher-energy region beyond 3.5 eV. The Bethe-Salpeter method allows reproduction of the experimental amplification of the Raman susceptibility with the frequency. Agreement using this method is sig- nificantly better than the agreement obtained by RPA. In the band-gap region, however, there is a discrepancy between the theoretical and the experimental maximal Raman susceptibility. The BSE maximum is nevertheless still closer to the experimental maximum than the RPA maximum. Moreover, it is important to note that the BSE theoretical results obtained in this work are valid only at low temperature, whereas the experimental data are measured at 300 K. The effect of the temperature on the absorption spectrum is illustrated in Fig. 9 where the first absorption peak at 10K is brought to lower energies at 297K, in agreement with the gap narrowing. On the basis of this observation, we can expect an improvement in the agreement if temperature effects are included in the ab initio calculations. As a first approximation, the effect of temperature on the gap energy can be mimicked by a rigid shift of the Raman intensity curve towards lower energies, as shown in Fig. 8 for data called "BSE (b)". With this correction, the agreement is highly improved in the low-energy part of the Raman susceptibility. The amplification factor and post-resonance are however not significantly improved. We attribute the disagreement to the different approximations we have performed so far, in particular within the BSE framework and the quasi-static approximation. Our ab initio approach is not able to describe the higher-energy region as well as the lower-energy region. Without the temperature correction, the BSE theoretical results underestimate the evolution of intensities in the pre-resonance region. The inclusion of temperature leads to a partial improvement in the overall agreement, except for the post-resonance region. The slope of decrease in the post-resonance region is in agreement with the experimental slope while the intensity value is underestimated. The present approach relies on several approximations, whose roles still need to be analyzed. We have neglected, among others, the phonon frequency in the "quasi-static" approach, the quadratic response with respect to atomic displacements, the self-consistency with the GW approximation, the non-Hermitian coupling within BSE, the frequency dependence of the BSE hamiltonian, indirect transitions, the additional effects due to phonons (the experimental data have been obtained at room temperature). The latter effects could be included using a method similar to the method presented by Marini et al. 48 Calculations including all these effects would require computational resources unavailable to us at present. Despite these effects, in all regions, the BSE results are in better agreement with experimental data than the RPA results. This clearly indicates the importance of excitonic effects for an accurate ab initio description of Raman spectra. 
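The temperature correction discussed above ("BSE (b)" in Fig. 8) amounts to a rigid shift of the computed curve towards lower energies, mimicking the band-gap narrowing between 0 K and room temperature. A generic way to apply such a shift to tabulated data is sketched below; the shift value used in the example is illustrative, not the one used in the paper.

import numpy as np

def rigid_shift(omega, intensity, delta):
    """Shift a spectrum I(omega) towards lower energies by delta (eV),
    i.e. return I(omega + delta), via linear interpolation."""
    return np.interp(omega, omega - delta, intensity)

# e.g. shifted = rigid_shift(omega, raman_intensity, delta=0.2)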
Note also that silicon may possess specific characteristics that favour the good agreement observed here with the present method and its associated approximations. Such good agreement might not be obtained for materials with different characteristics, such as lower crystalline symmetry, stronger spin-orbit coupling, some degree of ionicity, or the presence of several atomic species. VII. CONCLUSIONS AND PERSPECTIVES A technique for the ab initio study of resonant Raman intensities has been proposed and applied to silicon. The technique relies on finite differences of the macroscopic dielectric function evaluated for distorted geometries, and includes excitonic effects by solving the Bethe-Salpeter equation. We found that convergence of the Raman spectrum with respect to the number of wavevectors used to sample the Brillouin zone is problematic. To tackle this problem, we proposed a double-grid averaging procedure that significantly improves convergence while keeping the computational effort at a reasonable level. The Bethe-Salpeter results are in better agreement with experiment than those obtained without excitonic effects (RPA). For laser energies in the band-gap region or below, the agreement is excellent once one accounts for the small rigid shift needed to align the first peak of the theoretical absorption spectrum with the experimental one at the temperature of the Raman measurement. The agreement is still not perfect in the high-energy region. This may be attributed to several different effects, to be examined in future work.
2013-09-07T11:02:34.000Z
2013-09-07T00:00:00.000
{ "year": 2013, "sha1": "657af0682ff7516c31d0c87441fe75314f46c4c1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1309.1850", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "657af0682ff7516c31d0c87441fe75314f46c4c1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
35429988
pes2o/s2orc
v3-fos-license
Cholethorax following percutaneous transhepatic biliary drainage. Editor, We report the case of a 51 year old man who developed the unusual complication of a bilious pleural effusion, or ‘Cholethorax’, following percutaneous transhepatic biliary drainage. Case Report: A 51 year old man with locally advanced gastric adenocarcinoma presented with painless jaundice one year following the completion of palliative chemotherapy. Laboratory investigations revealed a bilirubin level of 299 µmol/L with AST 117 U/L, ALT 134 U/L, GGT 2447 U/L, ALP 2159 U/L, and an ultrasound of the abdomen confirmed the presence of biliary obstruction. Percutaneous Transhepatic Cholangiography (PTC) was arranged as the presence of a gastric tumour precluded an approach using Endoscopic Retrograde Cholangiopancreatography (ERCP). The right hepatic duct was cannulated and contrast injected, demonstrating a complicated stricture of the common bile duct. An internal-external biliary drain was then inserted across this stricture to decompress the biliary tree, and the position of the drain is shown in figure 1. Three days after the PTC our patient complained of severe right sided pleuritic chest pain and shortness of breath. A chest x-ray revealed right basal atelectasis, and provisional diagnoses of a lower respiratory tract infection and possible pulmonary embolus were offered. Over the next 48 hours the patient became increasingly dyspnoeic, with signs of a right sided pleural effusion on examination, and so a repeat chest radiograph was carried out (fig 2). The output of bile into the drainage bag had dramatically decreased and the bilirubin level had risen further to 387 µmol/L. A pleural aspiration was performed which yielded dark brown pleural aspirate with a bilirubin level of 766 µmol/L (fig 3). A diagnosis of a bilious pleural effusion (Cholethorax) as a complication of percutaneous transhepatic biliary drainage was made. The insertion of a 28F chest drain and rapid drainage of the bilious pleural fluid provided immediate relief of the shortness of breath and pleuritic chest pain. A further PTC was carried out urgently and three self-expanding metal stents were inserted across the complicated biliary stricture to provide adequate biliary drainage. Discussion: PTC and biliary drainage are used for the management of malignant biliary obstruction in cases where ERCP is inappropriate or has been unsuccessfully attempted. It involves the percutaneous cannulation of either hepatic duct, followed by placement of a biliary drain to decompress the biliary tree and subsequent insertion of a stent during the initial procedure or a number of days later. During biliary cannulation it may be necessary to traverse the pleural cavity to gain access to either hepatic duct. An internal-external biliary drain is inserted, consisting of a pigtail drain with a hole at the tip to allow the bile to exit into the duodenum and a number of side-holes along the distal length. These side-holes should be placed inside the common bile duct (Fig 1) to allow entry of the bile, which then drains internally into the duodenum or externally into a drainage bag. In our patient's case the drainage catheter became dislodged, with the tip remaining in the right hepatic duct while the side-holes formed a direct communication with the pleural cavity. This occurred due to the trans-pleural approach taken during the PTC, and as a result bile rapidly drained into the pleural cavity causing a 'Cholethorax'.
Bile is an intense chemoirritant, and so extensive pleural inflammation was established; this also allowed the chest drain to be removed relatively quickly, as it essentially caused a pleurodesis. Biliary pleural fistulas and the formation of bilious pleural effusions are known complications of hepatic trauma 1,2 , parasitic liver disease 3 and development of a subphrenic abscess in the setting of biliary obstruction. Iatrogenic causes include biliary stent migration 4 , radio-frequency ablation 5 and following cholecystectomy 6 and liver biopsy. 7 However, it is the increasing use of percutaneous biliary drainage which has led to the greatest number of cases. [8][9][10] For a Cholethorax to arise, disruption of the pleural space needs to have occurred, and this may not necessarily be obvious during the procedure. Rapid thoracentesis, correction of the cause of the fistula, adequate analgesia and the treatment of infective sequelae are essential in the management of this group of patients. Diffuse sclerosing variant of papillary thyroid carcinoma - a rare cause of goitre in a young patient Editor, Papillary thyroid carcinoma is the most common thyroid malignancy. We report a case of a rare variant - diffuse sclerosing papillary thyroid carcinoma (DSPC). Case History: An 18 year old girl presented with a smooth symmetrical goitre. She was clinically euthyroid and had no palpable cervical lymph nodes. Thyroid function tests and anti-thyroid peroxidase level were normal. Ultrasound scan of the thyroid showed marked nodular enlargement of the entire gland in keeping with a multinodular goitre. A hypoechoic 1 cm nodule was identified in the right lobe which was found to be 'cold' on radio-isotope scanning. A fine needle aspiration of this 'cold' nodule was reported as papillary carcinoma. She was booked for total thyroidectomy. At surgery she had an enlarged thyroid, with a gross appearance in keeping with a thyroiditis or lymphoma. Frozen section confirmed papillary carcinoma. The gland was hard and gritty. Several local lymph nodes were also excised. Post-operative recovery was uneventful. Sectioning revealed a diffusely firm, white, gritty gland (fig 1). Histopathology showed this to be the rare diffuse sclerosing
2014-10-01T00:00:00.000Z
2007-05-01T00:00:00.000
{ "year": 2007, "sha1": "ee6ace5445f9e0811340439b55ce4d5e7cd77d2f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "ee6ace5445f9e0811340439b55ce4d5e7cd77d2f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5364840
pes2o/s2orc
v3-fos-license
Pharmacotherapy of pediatric and adolescent HIV infection. Significant advances have been made in the treatment of human immunodeficiency virus (HIV) infection over the past two decades. Improved therapy has prolonged survival and improved clinical outcome for HIV-infected children and adults. Sixteen antiretroviral (ART) medications have been approved for use in pediatric HIV infection. The Department of Health and Human Services (DHHS) has issued "Guidelines for the Use of Antiretroviral Agents in Pediatric HIV Infection", which provide detailed information on currently recommended antiretroviral therapies (ART). However, consultation with an HIV specialist is recommended as the current therapy of pediatric HIV therapy is complex and rapidly evolving. Introduction The first cases of pediatric Acquired Immunodeficiency Syndrome (AIDS) were described in the United States (US) in 1983. 1 At that time, no specific antiretroviral (ARV) medications were available to children infected with HIV, most of whom succumbed to their infection. In 1990, zidovudine became the first approved pediatric ARV medication. HIV-infected children received zidovudine monotherapy with some clinical improvement but HIV infection remained a fatal disease. However, with the evolution of highly active antiretroviral therapy (HAART) during the 1990's, HIV-associated morbidity and mortality declined greatly, and HIV infection became a treatable and chronic disease for both adults and children. At the present time, most HIV-infected children in the US are surviving through adolescence to adulthood. In the US, most children acquire HIV via perinatal transmission, although infection may be acquired behaviorally (sexually or via injection drug use) in adolescents. Currently, there is no cure for HIV infection and viral eradication is not possible at this time. However, viral suppression to undetectable levels can be achieved by individualization of therapy, with careful selection of initial and subsequent treatment regimens. Maximal viral suppression is associated with delayed HIV disease progression and improved survival. 2 Maintaining adherence to prescribed regimens, preventing short-term and long-term drug toxicities, and managing drugresistance in HIV-infected children and adolescents receiving ART are the main challenges faced today in the treatment of pediatric HIV infection. Although many of the medications used for treatment are the same as those used in adults, issues unique to pediatric HIV infection include differences in mode of viral acquisition, immaturity of the immune system, and considerations of growth and development, Dovepress especially during puberty. Treatment decisions must include consideration of the child's age, treatment readiness, psychosocial milieu, past treatment history, as well as available drug formulations and treatment schedule. Possible drug teratogenicity must be considered in the treatment of adolescents who may be sexually active. Goals of ART include reduction of HIV-associated morbidity and mortality, preservation or restoration of immune function, and reduction of HIV viral replication. Whenever possible, consultation with a specialist in the treatment of pediatric or adolescent HIV infection is recommended as the current therapy of pediatric HIV infection is complex and rapidly evolving. This review summarizes the current "Guidelines for the Use of Antiretroviral Agents in Pediatric HIV Infection." 
3 The ARV medications discussed below apply primarily to highly resourced settings such as the US. Although zidovudine given perinatally to mother and child has been proven to successfully interrupt perinatal HIV transmission, use of post-exposure prophylaxis will not be discussed here nor will prophylaxis and treatment of opportunistic infections in HIV-infected children. 4,5 HIV virology/Life cycle HIV-1 is a human retrovirus which contains the enzyme reverse transcriptase. During acute HIV infection, the viral envelope glycoprotein gp 120 binds to the CD4 receptor on the mature human helper T lymphocyte (CD4 cell). Subsequently, the viral envelope binds to 1 of 2 chemokine coreceptors, either CXCR4 (X4) or CCR5 (R5). Most viral isolates acquired via sexual transmission utilize the R5 chemokine receptor (R5-tropic) while X4 viral strains tend to predominate in more advanced HIV infection (X4-tropic). A conformational change then ensues, resulting in fusion of the viral envelope with the CD4 cell membrane, after which the HIV viral RNA enters the cell cytoplasm. 6 The viral reverse transcriptase then transcribes the singlestranded viral RNA into host double stranded DNA. The viral enzyme integrase inserts the proviral DNA into the host genome via a strand transfer reaction. During subsequent activation of this infected CD4 T cell, the integrated proviral DNA is translated into new viral proteins. Assembly of these viral products then occurs on the cell surface, followed by budding of immature viral particles. The viral enzyme protease aids in the production of final infectious virions by cleaving viral polyprotein precursors, and generating new viral proteins. Active viral replication then occurs, eventually decreasing over time to a steady state level. A progressive immunodeficiency is seen over time, which may culminate in the occurrence of opportunistic infections, malignancy, and death. Overview of antiretroviral medications Since 1996, 31 ARVs have been approved for use, with 16 of these approved for use in children ( Table 1). Three of these medications (zalcitabine, amprenavir, and delavirdine) are no longer manufactured. There are currently six classes of medications for treatment of HIV infection that block various stages of the HIV life cycle by targeting key enzymes. ARVs such as reverse transcriptase inhibitors (both nucleoside reverse transcriptase inhibitors and non-nucleoside reverse transcriptase inhibitors), integrase inhibitors, and protease inhibitors prevent replication of the viral genome and intracellular viral maturation. Fusion inhibitors and CCR5 antagonists block the entry of virus into the CD4 cell either by interfering with viral fusion or co-receptor binding. Nucleoside reverse transcriptase inhibitors, non-nucleoside reverse transcriptase inhibitors, and protease inhibitors all have pediatric indications, while entry inhibitors, CCR5 antagonist, and integrase inhibitors are newer agents used primarily for the treatment of multidrug-resistant HIV infection in adults. Pediatric experience with these newer medications is limited and their use is only currently recommended in cases of treatment failure. Most of the data regarding pediatric HIV treatment are derived from adult data or Phase I/II clinical trials as few phase III ART clinical trials have been performed in HIV-infected children. However, HAART regimens have been shown to be safe efficacious, and durable in HIV-infected children, even with longterm use. 
[7][8][9] The following will serve as an overview of the medications currently used in the treatment of HIV-infected children and adolescents. More detailed information on each medication can be found elsewhere. 3 Classes of antiretroviral medications Nucleoside reverse transcriptase inhibitors (NRTis) NRTIs were the first class of ARV medications approved for the treatment of HIV infection. NRTIs inhibit the viral enzyme reverse transcriptase, preventing transcription of viral RNA into host DNA. These nucleoside analogues are prodrugs that require intracellular phosphorylation for activation. The 6 NRTIs currently approved for use in HIV-infected children Thymidine analogues Zidovudine (Retrovir ® ) Zidovudine was the first ARV medication ever approved. It is available in liquid, capsule, tablet, and a concentrate form for injection or intravenous infusion. The oral form is usually administered twice daily (thrice daily in younger children), with the dose determined by weight or body surface area. Although it is generally well-tolerated by children, bone marrow suppression may occur, resulting in anemia and leukopenia. Other common side effects include headache, malaise, and gastrointestinal symptoms such as nausea, vomiting, and anorexia. Myopathy, myositis, liver toxicity, and fat redistribution are less common side effects. 10,11 Stavudine (Zerit ® ) Stavudine is available as a capsule or solution for twice daily administration. Common toxicities include headache, rash, and gastrointestinal symptoms. Less common side effects include peripheral neuropathy, lipodystrophy, serum transaminase elevation, and neuromuscular weakness. Although use of all NRTIs has been linked to lipoatrophy and hyperlactatemia, this effect is seen more often with stavudine. 12 Thus, at the present time, use of stavudine is limited to special circumstances (such as a young child with human leukocyte antigen [HLA] B*5701 who cannot be given abacavir, or a child with anemia who may experience additional hematologic suppression from zidovudine). 3 Nonthymidine analogues Lamivudine (epivir ® ) Lamivudine is a cytosine analogue that has been extensively studied in HIV-infected children. 11,13 It is available both as an oral solution and a tablet, that are well-tolerated with few side effects (headache, fatigue, anorexia, rash, abdominal pain, and insomnia). Less common side effects include pancreatitis, peripheral neuropathy, anemia, neutropenia, serum transaminase elevation, and fat redistribution. emtricitabine (emtriva ® ) Emtricitabine is a pyrimidine nucleoside analogue that is a fluorinated derivative of lamivudine. It was approved in 2003 for once daily dosing and is available in a palatable liquid formulation. Emtricitabine was shown to be safe, well tolerated, and durable in the treatment of HIV-infected children as young as three months. 14 Its longer half-life, higher oral bioavailability, and slightly greater potency in vitro led to its inclusion in combination with zidovudine over lamivudine as a preferred component of the dual NRTI backbone. 3 It may be taken with or without food, and has few drug interactions. Side effects are generally mild and include rash, headache, insomnia, diarrhea, nausea, and vomiting. Less commonly seen adverse events include neutropenia and skin hyperpigmentation which has been reported in African-American patients receiving emtricitabine. 
Didanosine (videx and videx eC ® ) Didanosine is an adenosine analogue that is available as a delayed-release capsule for once daily administration or a pediatric powder for oral solution. Although didanosine is best absorbed on an empty stomach, pediatric pharmacokinetic studies have shown that acceptable drug levels are achieved, although with delayed and slower absorption in children, when it is administered with food. 15 The tablets and the oral solution contain antacids which may affect the absorption of other medications. Didanosine's side effects are predominantly gastrointestinal, consisting of nausea, vomiting, diarrhea, and abdominal pain. Less commonly seen are peripheral neuropathy, electrolyte abnormalities and hyperuricemia. Pancreatitis, retinal changes, hepatotoxicity, and optic neuritis have also been reported. The virologic efficacy of combined didanosine, emtricitabine, and efavirenz has been proven in pediatric trials, resulting in its selection as a preferred dual NRTI agent in combination with emtricitabine. 3,16 Abacavir (Ziagen ® ) Abacavir is a guanosine analogue that has been shown to be safe and efficacious for long-term use in HIV-infected children. 13,17 The most common side effects associated with its use include nausea, vomiting, diarrhea, anorexia, fever, headache, and rash. 18 Although abacavir is a potent Pharmacotherapy of pediatric Hiv Dovepress submit your manuscript | www.dovepress.com Dovepress suppressor of viral replication, it has the potential to cause a life-threatening hypersensitivity reaction in genetically predisposed individuals who possess the HLA allele B*5701. This hypersensitivity reaction occurs in 5%-8% of recipients, and consists of fever, rash, systemic, gastrointestinal and respiratory symptoms, usually during the first 6 weeks of therapy. Continuation of abacavir therapy has been associated with increased severity of symptoms, and fatality. Rechallenge with abacavir is contraindicated in anyone reporting past hypersensitivity reaction. Pharmacogenetic testing for the HLA phenotype B*5701 is recommended prior to use in HIV-infected patients, and abacavir should be withheld from those who test positive for this allele. HIV-infected children and adults receiving abacavir should be informed of this risk of hypersensitivity reaction, and instructed to immediately contact their provider regarding abacavir discontinuation, if fever and rash occur. 19 Other side effects of abacavir include nausea, diarrhea, abdominal pain, asthenia, headache, and elevation of serum transaminases or creatinine. Nucleotide analogues Tenofovir disoproxil fumarate (viread ® ) Tenofovir disoproxil fumarate is a nucleotide analogue of deoxyadenosine monophosphate approved for use in young adults 18 years. Its long half life allows for once daily administration. It is available only in tablet form, and may be taken with or without food, although its absorption is enhanced with food. Tenofovir is excreted unchanged by the kidneys; renal tubular toxicity is a rare side effect of therapy. However, monitoring of renal function is recommended during tenofovir therapy, and dose adjustment is necessary in renal failure. Other side effects include nausea, diarrhea, rash, and flatulence. Lack of pediatric dosing information and possible bone toxicity preclude the use of tenofovir in prepubertal children (Tanner Stages 1-3) or those 18 years. 20 However use of tenofovir may be considered in older postpubertal children. 
3 Drug interactions with numerous ARVs are seen, and dose adjustment may be necessary. Tenofovir increases serum concentrations of didanosine, requiring decreased didanosine dose, while dose increases are necessary for tipranavir, since tenofovir decreases serum concentrations of this medication. Concomitant lopinavir/ritonavir may increase serum tenofovir levels, possibly resulting in greater tenofovir toxicity. 21 Tenofovir also decreases serum levels of atazanavir. If tenofovir and atazanavir are co-administered, ritonavir boosting should be used with the atazanavir; use of unboosted atazanavir with tenofovir is not recommended. Non-nucleoside reverse transcriptase inhibitors (NNRTis) The NNRTIs also inhibit the enzyme reverse transcriptase. By binding noncompetitively to the hydrophobic pocket close to the polymerase catalytic site of the reverse transcriptase, NNRTIs decrease reverse transcriptase polymerizing activity. NNRTIs include nevirapine, efavirenz, delavirdine, and etravirine, but only efavirenz and nevirapine have pediatric indications. Although NNRTIs have low pill burdens and cause less hyperlipidemia and fat maldistribution, the rapid emergence of class resistance, and high frequency of cutaneous reactions associated with these medications are drawbacks to their use. Both efavirenz and nevirapine have very long half-lives and may remain detectable for up to 21 days after discontinuation. Most NNRTIs are metabolized by the liver and drug interactions are possible with this class of medications. 3 efavirenz (Sustiva ® ) Efavirenz is the preferred NNRTI in children due to its low toxicity and once daily administration. However, it is formulated only as a capsule and approved only for children 3 years weighing 10 kg. Its absorption is enhanced on an empty stomach. Dosing for efavirenz is usually weight-based, but body surface area dosing has also been utilized, especially in regimens containing lopinavir/ ritonavir. 22,23 Efavirenz is well-tolerated, but neuropsychiatric side effects such as dizziness, fatigue, insomnia, agitation, impaired concentration, vivid dreams, and suicidal ideation may occur with its use. For this reason, it should be administered at bedtime, and cautious use is suggested in patients with psychiatric disease. Rash occurs less commonly than with nevirapine, but lipid abnormalities have been reported. Efavirenz is teratogenic and its use should be avoided in women of reproductive age unless adequate contraception is ensured. Efavirenz is a mixed inducer/inhibitor of cytochrome P450 3A4 enzymes and is associated with many drug interactions. It may reduce levels of lopinavir, saquinavir, and indinavir, if co-administered with these drugs. Concomitant administration of midazolam and triazolam should be avoided. Studies have demonstrated the virologic efficacy of efavirenz in combination with NRTIs and/or PIs in the treatment of pediatric HIV infection. 16,[24][25][26] Nevirapine (viramune ® ) Nevirapine was the first NNRTI approved by the US Food and Drug Administration (FDA). It is available both as a tablet and a liquid formulation, both of which may be given with or Dovepress without food. Nevirapine should be used in children 3 years, or in children requiring a liquid NNRTI formulation, if an NNRTI-based regimen is desired. It has been proven safe and efficacious in pediatric clinical trials. 27,28 The most common side effect is rash, seen in 5% of treated children, usually during the second week of therapy. 
10 Erythema multiforme, Stevens-Johnson syndrome, and toxic epidermal necrolysis have been reported; nevirapine should be discontinued in the case of a severe rash; milder skin eruptions generally resolve within 1-2 weeks and do not require drug discontinuation. Initiation of half-dose nevirapine therapy, followed by stepwise dose escalation over a two-week period allows for autoinduction of cytochrome p450 metabolizing enzymes CYP3A and 2B6, resulting in increased drug clearance and decreased cutaneous toxicity. Although, life-threatening hepatotoxicity has also been reported in adults with higher CD4 counts, usually during the first 12 weeks of therapy, nevirapine-induced hepatotoxicity is seen infrequently in children. 29 Concomitant use of other potentially hepatotoxic medications should be avoided. Serum transaminase levels should be checked every two weeks during the first month of nevirapine therapy, then monthly for three months, and then every three to four months. Because of potential hepatotoxicity, nevirapine should not be used in post-pubertal females with CD4  250 or males with CD4  400, unless benefits of therapy outweigh the risks. 30,31 Nevirapine also has the potential for numerous drug interactions as it acts as an inducer of hepatic cytochrome p450 enzymes including CYP3A and 2B6. Certain PI doses may require adjustment if given with nevirapine. Two other NNRTIs have been approved for HIV-infected adults but not children. These include Delavirdine (Rescriptor) and Etravirine (Intelence). Delavirdine is no longer manufactured Etravirine is a second generation NNRTI active against viruses with resistance to first generation NNRTIs that is useful in treatment-experienced adults. However, this medication has not been adequately studied in children and cannot be recommended at the present time. 3 Protease inhibitors There are seven protease inhibitors (PIs) currently approved for use in children. The use of combination therapy with PIs in HIV-infected children and adolescents has been shown to reduce mortality by 67%. 32 PIs inhibit the viral enzyme protease by binding competitively to the enzyme, limiting the production of mature infectious virions. PIs have a high genetic barrier for the development of drug resistance and are potent inhibitors of viral replication. However, their use is complicated by high pill burdens, numerous drug interactions, and frequent metabolic side effects. Abnormalities of glucose metabolism (hyperglycemia, insulin resistance, or diabetes mellitus) and changes in body fat distribution (lipodystrophy) and dyslipidemias (hypertriglyceridemia, hypercholesterolemia) are commonly attributed to this class of medications. In addition, PIs have been associated with serum transaminase elevation, hepatitis, pancreatitis, and spontaneous bleeding in hemophiliacs. Unpalatable drug formulations and large pill size are drawbacks to PI use. Also, drug interactions occur frequently with this class of medications that are metabolized by the hepatic cytochrome P450 enzyme system, necessitating possible dose adjustments. For example, decreased levels of rifabutin, rifampin, and ketoconazole may be seen if these medications are co-administered with PIs and interactions may be seen with certain oral contraceptives. Many PIs are combined with low dose ritonavir in adults (boosted PI or pharmacoenhancer). Ritonavir inhibition of the cytochrome p450 3A4 enzyme results in diminished metabolism of the second PI. 
However, many boosted PI combinations have not been studied in HIV-infected children, and there is potential for enhanced drug interactions and hyperlipidemia. Current guidelines recommend the use of three boosted PI regimens (lopinavir, atazanavir and fosamprenavir) in HIV-infected children. 3 Lopinavir/ritonavir (Kaletra ® ) This co-formulated product is the preferred PI in children. It is available both as a liquid and a tablet, and may be administered to infants as young as two weeks. 3 Although co-administration with food is required for the liquid formulation, the tablet may be administered with or without food. The liquid formulation has a bitter taste, resulting in a poor patient acceptance rate. Common side effects include gastrointestinal symptoms such as nausea, vomiting, and diarrhea, rash, asthenia, lipid abnormalities, and headache. Changes in body fat composition occur less commonly than with other PIs, but lopinavir/ritonavir does interact with multiple medications including atorvastatin, ethinylestradiol, rifabutin, ketoconazole, itraconazole, and calciumchannel blockers. 33 Increased dosages of lopinavir/ritonavir are required if this medication is co-administered with nevirapine or efavirenz. Lopinavir/ritonavir has proven virologic and immunologic efficacy in the treatment of both treatment-naïve and treatment-experienced HIV-infected children. 9,[33][34][35] Nelfinavir (Viracept ® ) Nelfinavir was the first PI recommended for pediatric use and has been extensively studied in children. 9,36 At the Pharmacotherapy of pediatric Hiv Dovepress submit your manuscript | www.dovepress.com Dovepress present time, nelfinavir is recommended as an alternative PI for children aged 2 years. 3 The liquid formulation is compounded by mixing nelfinavir powder with water. Nelfinavir requires administration with food and is usually dosed twice daily, but younger children require thrice daily administration. Nelfinavir is well-tolerated; diarrhea is the most common side effect. Other frequent side effects include asthenia, abdominal pain, rash, and lipid abnormalities. Less commonly, fat redistribution and exacerbation of chronic liver disease may occur. 30 In 2007, nelfinavir was temporarily banned for use in children aged 2 years after it was found to contain ethyl methanesulfonate (EMS), a potential carcinogen. 37 However, the product was reformulated and now meets acceptable EMS limits as per the FDA recommendation for use in children and pregnant women. 3,38 A number of drug interactions are seen between nelfinavir and certain antibiotics, proton pump inhibitors, corticosteroid preparations, and benzodiazepines. In addition, nelfinavir may interact with other ARV medications including a number of protease inhibitors, didanosine, and nevirapine. Atazanavir (Reyataz ® ) Boosted atazanavir is currently recommended as an alternative PI in children 6 years. Use of unboosted atazanavir in children aged 13 yrs (and/or 39 kg) is generally not recommended due to lack of pediatric dosing information, but may be considered for pediatric use under special circumstances (children aged 13 years weighing 39 kg). 3,39 Atazanavir is administered once daily with food. Its most common side effect is asymptomatic hyperbilirubinemia but jaundice, headache, fever, arthralgia, depression, insomnia, dizziness, nausea, vomiting, diarrhea, and paresthesias may occur. PR interval prolongation has been reported with first-degree and second degree atrioventricular block. 
Therefore atazanavir should be used cautiously in patients with arrhythmias, or those receiving other agents such as calcium channel blockers that may affect the PR interval. Skin rash is most often mild to moderate, but Stevens-Johnson syndrome has been reported. Compared with other PIs, atazanavir has less effect on serum lipids and causes less fat redistribution. Use of atazanavir is also complicated by numerous drug interactions, as it is both a substrate and inducer of the CYPA4 enzyme system. Only boosted atazavanir should be co-administered with tenofovir. As efavirenz and nevirapine may decrease plasma levels of atazanavir, nevirapine co-administration should be avoided; efavirenz and boosted atazanavir may be co-administered only in treatment-naïve patients but not in treatment-experienced patients. Caution is also needed with co-administration of H2-receptor antagonists and proton pump inhibitors. Atazanavir should be given one hour after buffered didanosine, if these 2 medications are given together. 39 Fosamprenavir (Lexiva ® ) Fosamprenavir is a prodrug of amprenavir that is formulated as both a liquid and a tablet that may be administered with or without food. Boosted fosamprenavir is currently recommended as an alternative PI in children aged 6 years, and for use without ritonavir in children aged 2 years under special circumstances. It is administered once daily to adults but twice daily dosing is recommended for HIV-infected children. 3 Selection of dosing regimen depends upon whether the patient is treatment-naïve or treatment-experienced. The main advantage of fosamprenavir use is a low pill burden; numerous drug interactions are the major drawback to its use. Side effects include rash, gastrointestinal symptoms, headache, perioral paresthesias, and lipid abnormalities. Less common side effects include severe rash, fat redistribution, neutropenia, and elevation of serum creatinine levels. Fosamprenavir is a sulfonamide that should be used cautiously in patients reporting sulfonamide allergy. 39 Ritonavir (Norvir ® ) Ritonavir has been available for long-term pediatric use. This potent PI is available as a liquid formulation that must be administered with food, but has a poor acceptance rate due to its bitter taste. Numerous drug interactions due to inhibition of the cytochrome P450 isoenzyme CYP3A complicate its use. 30 Side effects include gastrointestinal symptoms, headaches, circumoral paresthesias, and lipid abnormalities. Less commonly, fat redistribution and exacerbation of chronic liver disease may occur. Caution is recommended when administering this drug to patients with moderate-severe hepatic impairment. PR interval prolongation and allergic reactions including urticaria, angioedema, and bronchospasm have also been reported. 9 Saquinavir (invirase ® ) Saquinavir is not approved for use in children but is recommended for use in combination with low dose ritonavir under special circumstances. 3 Currently, it is available only in a soft gel tablet form that is administered twice daily with food. Side effects include gastrointestinal intolerance, paresthesias, headache, rash, and lipid abnormalities. Less commonly, fat redistribution and exacerbation of chronic liver disease may occur. Due to low bioavailability, this PI should never be used as a Dovepress sole PI and requires boosting with another PI such as ritonavir. 30 Numerous drug interactions also complicate its use. 
indinavir (Crixivan ® ) Indinavir is a PI administered only via capsule that requires thrice daily dosing. It is not available in liquid formulation and the risk of renal toxicity (hematuria, nephrolithiasis) limits its use in children. However, it may be considered for use, along with ritonavir boosting, in adolescents who weigh enough to receive adult dosing. It should be taken on an empty stomach and patients receiving this PI should be advised to increase their fluid intake. 30 Two other PIs commonly used for the treatment of HIVinfected adults, but not recommended for use in children at this time include tipranavir and darunavir. Tipranavir (Aptivus ® ) has proven useful for PI-experienced adults with multiple PI mutations in the viral genome. Although it was approved for use in children aged 2 through 18 years in 2008, lack of pediatric data limits its use in children. It is not recommended for initial therapy but may be useful in children with treatment failure. 3 It is available as a tablet and an oral solution and may be given with or without food. Drug interactions occur frequently. Common side effects include gastrointestinal symptoms, fatigue, headache, rash, lipid abnormalities, and serum transminase elevation. Less common side effects include fat redistribution, hepatitis/hepatic decompensation, and epistaxis. Rarely, intracranial hemorrhage has been seen. Darunavir (Prezista ® ), a potent PI with activity against multidrug-resistant HIV, was approved for use in HIV-infected adults in 2006. Although it was approved for children aged 6 years in 2008, this medication is not recommended for initial therapy in children secondary to high pill burden and limited pediatric data. However, its use may be considered in children with treatment failure. 3 It is available only in a tablet that should be given with food, and boosted with ritonavir. Side effects include gastrointestinal symptoms (nausea, vomiting, diarrhea, abdominal pain), headache, and fatigue. Less commonly, rash, fever, lipid abnormalities, and serum transminase elevation have been reported. Numerous drug interactions have also been described. Both darunavir and tipranavir contain a sulfonamide component and should be used cautiously in patients reporting sulfonamide allergy. Amprenavir (Agenerase ® ) is a PI that is no longer manufactured and has largely been replaced by fosamprenavir. Fusion inhibitors Fusion inhibitors inhibit the entry of HIV-1 into host cells by preventing fusion of the viral and cell membranes. enfuvirtide (Fuzeon ® ) Enfuvirtide is a synthetic 36-amino acid peptide that binds to the gp41 of the HIV viral envelope. It is administered twice daily via subcutaneous injection. 40 Enfuvirtide is recommended for use in treatment-experienced HIV-positive adults with advanced HIV infection. Although it was approved for children aged 6 years, its use is recommended only in cases of treatment failure. Most patients experience injection site reactions, which are usually mild. 41 Increased rates of bacterial pneumonia and local site cellulitis have been reported. Rare systemic hypersensitivity reactions have been reported in 1% of patients and require permanent discontinuation of this medication. entry inhibitors Entry inhibitors block the binding of HIV-1 to the human CCR5 chemokine receptor, preventing viral entry into the CD4 cell. 
Maraviroc (Selzentry ® ) Maraviroc is an R5-specific inhibitor approved in 2007 for use in HIV-infected patients aged 16 years with R5-tropic virus and multidrug-resistant virus. A co-receptor tropism assay should be performed prior to the initiation of therapy with any CCR5 inhibitor to confirm that a patient's dominant virus population is R-5, rather than X4. Only patients with R5-tropic virus would be expected to respond to Maraviroc; it is ineffective against those with dual-tropic or predominantly X-4 virus. Maraviroc is not approved for use in children aged 16 years, but may be useful in cases of treatment failure. Common side effects include cough, fever, upper respiratory infection (URI), rash, musculoskeletal symptoms, abdominal pain, and dizziness. Less common side effects include cardiovascular abnormalities and hepatic failure. There are multiple drug interactions which may affect its dosing. It may be given with or without food, but decreased absorption occurs with fatty meal administration. 6,42 integrase inhibitors Integrase inhibitors inhibit proviral DNA-strand transfer, interfering with insertion of viral DNA into the host genome. Raltegravir (isentress ® ) Raltegravir was approved in 2007 for use in treatmentexperienced HIV-infected patients aged 16 years with multidrug-resistant virus. It is dosed twice daily in adults but lack of pediatric data and unavailability of a pediatric formulation limit its use in children at this time. 43,44 Side effects include diarrhea, nausea, URI symptoms, and dizziness. Pharmacotherapy of pediatric Hiv Dovepress submit your manuscript | www.dovepress.com Fixed drug combinations A number of fixed drug combination (FDCs) have proven convenient for ART administration in adults. There are currently five FDCs available including Combivir ® , Trizivir ® , Epzicom ® , Truvada ® , and Atripla ® . Combivir ® consists of zidovudine/lamivudine; Trizivir ® contains zidovudine, lamivudine, and abacavir. Lamivudine and abacavir are combined within Epzicom ® while Truvada ® contains combined emtricitabine and tenofovir. Atripla ® is a once daily pill containing emtricitabine, tenofovir, and efavirenz. The main advantage of FDCs is ease of use with lower pill burdens. However, providing the correct dose for children may be difficult with FDCs, and underdosage may occur in children who may require higher doses relative to body weight compared with adults. 37 Future treatment/novel therapies Pediatric studies are currently under way to determine the safety and efficacy of ARV agents such as etravirine and tenofovir, which are currently approved for HIV-infected adults, but not for children. 20 NRTIs in development for adult use include elvicitabine, racivir and apricitabine, while rilpivirine is a second generation NNRTI under study. 45 Sifuvirtide is a next-generation fusion inhibitor in development. 46 Experimental chemokine receptor inhibitors include the CCR5 antagonist Vicriviroc and CXCR4 antagonists. 47,48 Elvitegravir is a once daily integrase inhibitor being studied in adults. Ibalizumab is an anti-CD4 monoclonal antibody that interferes with HIV viral entry. New classes of ART under investigation include inhibitors of viral transcription, translation, and maturation. Recommendations from current pediatric guidelines The first guidelines for treatment of pediatric HIV infection were issued by the US Working Group on Antiretroviral Therapy and the Medical Management of HIV-Infected Children in 1993 and have since undergone numerous revisions. 
These guidelines include recommendations for the selection of initial and subsequent regimens, and contain both preferred and alternate regimens for the treatment of HIV-infected children, as well as detailed prescribing information for all pediatric and adolescent ARV medications. 3

Initiation of antiretroviral therapy in antiretroviral-naïve children
The decision to initiate ART is based upon disease severity, risk of disease progression, and availability of appropriate and palatable drug formulations (Table 2). Benefits of early therapy include preservation of immune function, prevention of disease progression, and possible lowering of the viral set point. Delaying the initiation of therapy may promote improved future medication adherence, reduce drug toxicity, and minimize drug resistance. Although the optimal timing of initial therapy is controversial in HIV-infected adults, most HIV specialists agree that ART is indicated for all HIV-infected infants aged <1 year, as they are at high risk of disease progression. Early ART has been shown to decrease the risk of AIDS and death in perinatally-infected infants. 49 Treatment is also recommended for any child aged ≥1 year with moderate (most clinical category B conditions) or severe symptoms, including AIDS (Category C), regardless of CD4 count or HIV RNA level, as HIV disease progression is likely to occur in these children (Table 3). 50 Current guidelines also advocate treatment of any child aged ≥1 year with CD4 lymphocytopenia for age, and of older children (≥5 years) who are symptomatic or have CD4 lymphocytopenia for age (Table 2). Thus, ART would be indicated for children aged 1 to 5 years with CD4 <25%, or children aged ≥5 years with CD4 <350 cells/mm3. ART may be considered for children aged ≥1 year with normal CD4 for age and a high viral load (plasma HIV RNA ≥100,000 copies/mL), even in the absence of symptoms or presence of only mild symptoms (clinical categories N, A, or the selected clinical category B conditions of bacterial infection or lymphoid interstitial pneumonitis [LIP]). Treatment may be deferred in children aged ≥1 year who lack symptoms or have only mild symptoms, and who have normal CD4 for age and HIV RNA <100,000 copies/mL, as they are at low risk for HIV disease progression. Untreated children should be monitored closely over time with periodic CD4 counts, HIV RNA levels, and clinical follow-up to determine whether treatment should be initiated. Treatment readiness and the ability of the child and the caregiver to adhere to a prescribed regimen should be assessed prior to the initiation of therapy. Comorbid conditions such as tuberculosis, hepatitis B or C infection, or renal or liver disease may influence the choice of ARV medication. Laboratory assessment prior to initiation of therapy should include CD4 count/percentage, HIV RNA level, viral resistance testing (genotype or phenotype), complete blood count with differential, serum chemistries including hepatic enzymes, serum lipase and amylase, and serum lipids (triglycerides and cholesterol). CD4 percentage is preferred for immunologic monitoring of children aged <5 years, whereas the absolute CD4 count is a better immunologic parameter in older children. 51

Types of regimens
Most regimens include triple drug combinations consisting of two NRTIs with either an NNRTI (NNRTI-based regimen) or a boosted PI (PI-based regimen) (Table 4). Under special circumstances, a triple NRTI regimen consisting of zidovudine, lamivudine, and abacavir may be used.
3 Advantages of NNRTI-based regimens include ease of administration with low pill burden and decreased incidence of lipid abnormalities and fat maldistribution, compared with PI-based regimens. 37 However, cutaneous side effects as well as potential drug interactions, and development of cross-resistance among NNRTIs are drawbacks to these regimens. PI-based regimens are potent but their use is complicated by high pill burdens, frequent drug interactions, and drug toxicity. 37 The main advantage of these regimens is a high genetic barrier to the development of drug resistance. 9 Triple NRTI regimens are generally well-tolerated by children as they have few side effects, pediatric formulation availability, and ease of use. 18 This type of regimen will preserve NNRTIs and PIs for future use. Also, because of the a risk of a life-threatening hypersensitivity reaction to abacavir, use of the triple NRTI regimen containing zidovudine, lamivudine, and abacavir is recommended only under special circumstances when a preferred or alternate NNRTI-based or PI-based regimen cannot be used. 3 Other NRTI combinations such as tenofovir/abacavir/lamivudine or tenofovir/didanosine/lamivudine are not recommended as initial therapy for HIV-infected children due to inferior virologic potency. 52 The "Guidelines for the Use of Antiretroviral Agents in Pediatric HIV Infection"contain both preferred and alternate dual NRTI combinations to be used with either an NNRTI or PI-based regimens (Table 4). These various NRTI combinations have been studied and proven safe, effective, and durable in children. 13,16,36,53,54 The combination of zidovudine and lamivudine is the most studied in children and is well-tolerated. 11 Although less data is available on its pediatric use, the similarity of emtricitabine led to its selection as a possible substitute for lamivudine for dual NRTI use. 16 The therapeutic efficacy of zidovudine in HIV-infected children was first reported by the AIDS Clinical Trials Group (ACTG) 152 study team, which compared zidovudine monotherapy, didanosine monotherapy, and combined zidovudine/didanosine in 831 HIV+ symptomatic children. Combined zidovudine/didanosine therapy and didanosine monotherapy were proven superior to zidovudine. 55 A 2001 study compared the safety, tolerability, and efficacy of dual zidovudine/lamivudine versus triple therapy with zidovudine/lamivudine/abacavir in a cohort of 205 children and reported superiority of the three-drug combination compared with the two-drug regimen. 18 The "Guidelines for the Use of Antiretroviral Agents in Pediatric HIV Infection" also lists ARV medications that are not recommended for use in HIV-infected children at any time. 
Single-agent monotherapy or dual NRTI therapy should not be given, except for the use of zidovudine monotherapy in perinatal HIV exposure prophylaxis. Certain NRTI combinations (zidovudine/stavudine, lamivudine/emtricitabine, didanosine/stavudine) are not recommended for use as part of a HAART regimen. Other ARV medications not recommended for pediatric use include tenofovir, efavirenz (in the first trimester of pregnancy or in sexually active females of childbearing potential), and nevirapine (in adolescent females with CD4 >250 cells/mm3 or adolescent males with CD4 >400 cells/mm3, unless the benefits of therapy outweigh the risks). Pediatric use of unboosted saquinavir or combined indinavir/atazanavir is also not recommended. 3 In addition, the "Guidelines for the Use of Antiretroviral Agents in Pediatric HIV Infection" include ARV medications/ART regimens that lack sufficient pediatric data at this time for use as initial therapy in treatment-naïve children. However, use of these agents may be considered for secondary therapy in certain treatment-experienced or older children/adolescents. 3 Use of dual PI regimens, or of boosted PI regimens other than the three recommended alternate regimens (lopinavir/ritonavir, atazanavir/ritonavir, and fosamprenavir/ritonavir), is not recommended, nor is use of unboosted atazanavir in children aged <13 years or weighing <39 kg. Three-class regimens (NRTI + NNRTI + PI) should not be used. Contraindicated NRTI combinations include zidovudine/stavudine, lamivudine/emtricitabine, and stavudine/didanosine. Use of agents in newer classes, including tipranavir, darunavir, maraviroc, raltegravir, etravirine, and enfuvirtide, is also not recommended for initial therapy in HIV-infected children or adolescents at the present time but may be useful in cases of treatment failure. Indinavir has been studied mostly in small uncontrolled trials in HIV-infected children, is not currently FDA-approved in the pediatric age group, and is not recommended for use as initial therapy.

Table 3 (excerpt). Clinical Category C (AIDS-defining) conditions: serious bacterial infections, multiple or recurrent (ie, any combination of at least two culture-confirmed infections within a 2-year period) of the following types: septicemia, pneumonia, meningitis, bone or joint infection, or abscess of an internal organ or body cavity (excluding otitis media, superficial skin or mucosal abscesses, and indwelling catheter-related infections); candidiasis, esophageal or pulmonary (bronchi, trachea, lungs); coccidioidomycosis, disseminated (at a site other than, or in addition to, lungs or cervical or hilar lymph nodes); cryptococcosis, extrapulmonary; cryptosporidiosis or isosporiasis with diarrhea persisting >1 month; cytomegalovirus disease with onset of symptoms at age >1 month (at a site other than liver, spleen, or lymph nodes); encephalopathy, defined as at least one of the following progressive findings present for at least two months in the absence of a concurrent illness other than HIV infection that could explain the findings: a) failure to attain or loss of developmental milestones, or loss of intellectual ability, verified by standard developmental scale or neuropsychological tests; b) impaired brain growth or acquired microcephaly demonstrated by head circumference measurements, or brain atrophy demonstrated by computerized tomography or magnetic resonance imaging (serial imaging required for children <2 years of age); c) acquired symmetric motor deficit manifested by two or more of the following: paresis, pathologic reflexes, ataxia, or gait disturbance; herpes simplex virus infection causing a mucocutaneous ulcer that persists for >1 month, or bronchitis, pneumonitis, or esophagitis of any duration affecting a child >1 month of age; histoplasmosis, disseminated (at a site other than, or in addition to, lungs or cervical or hilar lymph nodes); toxoplasmosis of the brain with onset at >1 month of age; and wasting syndrome in the absence of a concurrent illness other than HIV infection that could explain the following findings: a) persistent weight loss >10% of baseline; OR b) downward crossing of at least two of the following percentile lines on the weight-for-age chart (eg, 95th, 75th, 50th, 25th, 5th) in a child ≥1 year of age; OR c) <5th percentile on the weight-for-height chart on two consecutive measurements ≥30 days apart; PLUS 1) chronic diarrhea (ie, ≥2 loose stools per day for ≥30 days), OR 2) documented fever for ≥30 days, intermittent or constant.

Monitoring of children receiving antiretroviral therapy
HIV-infected children and adolescents should be closely monitored following the initiation of ART to determine whether the prescribed regimen is tolerable and whether side effects are occurring. This is especially important during the first few weeks of therapy, when adherence may be compromised. Therefore, clinical or telephone follow-up one to two weeks after starting a new medication or undergoing a regimen change is recommended. Treated children should return to the clinic one month after beginning a new regimen for clinical and laboratory assessment consisting of complete blood count and differential, serum chemistries, serum lipids, CD4 count/percentage, and HIV RNA level. Increases in CD4 cell count/percentage and decreases in HIV RNA levels signify a response to ART. Ideally, ART should reduce HIV RNA to undetectable levels (HIV RNA <50 copies/mL). ART-treated patients should then be seen at regular intervals (every 3-4 months) to ascertain proper medication adherence, monitor for drug side effects, and ensure efficacy of the medical regimen. CD4 count/percentage and plasma HIV RNA should be checked every 3-4 months. Serum amylase and lipase should be monitored in patients receiving didanosine and stavudine. A lipid panel should be obtained every 6-12 months. 3 Immune reconstitution inflammatory syndrome (IRIS) may be seen in children during the first three months after initiating a new ART regimen. This occurs clinically as a worsening of symptoms of inflammation or infection due to increases in CD4 count. At this time, treatment of IRIS is largely empiric, although antibiotics, antivirals, and corticosteroids have been used. 56

Management of the treatment-experienced child/treatment failure
A prompt change in the ART regimen should be considered for HIV-infected patients who are experiencing clinical HIV disease progression, immunologic deterioration, drug resistance, or increasing HIV viral replication. Clinical features that may warrant a change in regimen include progressive neurodevelopmental deterioration, growth failure, or recurrent or severe infections. Any decreases in CD4 count or increases in HIV RNA levels should be confirmed by repeat laboratory testing prior to a medication switch.
CD4 levels are based on age and absolute counts and percentages, and will normally decrease over time in children. Successful ART therapy should result in a 1.0 log 10 decline in HIV RNA level from baseline after two to three months of therapy, but a slower decline may be seen in patients with higher initial HIV RNA levels. HIV RNA levels  400 copies/mL after six months of treatment or detectable HIV RNA after one year of treatment may represent virologic failure. It has been estimated that 30%-80% of HIV-infected children will experience treatment failure within one year of treatment initiation. 14 Children with treatment failure should be evaluated for medication adherence, drug intolerance, and possible drug interactions which may lessen the efficacy of the therapeutic regimen. Frequent medication regimen changes are not advisable for HIV-infected children since this may limit future treatment options. Thus, in certain circumstances, children may benefit from continuation of an ART regimen with more frequent monitoring of CD4 counts and HIV RNA levels. Careful review of ART medication history and past drug resistance testing is recommended prior to initiating any medication change, as is a complete history and physical examination. When selecting a new regimen, it is important Dovepress to discontinue all medications to which the patient's virus is resistant, and to avoid the initiation of any agents with possible cross resistance, to ensure that all agents are fully active. Cross resistance occurs commonly among NRTIs and NNRTIs. Another consideration in changing regimens is the preservation of future treatment options including use of novel agents. Caregivers should always discuss adherence issues and select a medication regimen that is acceptable to the patient in terms of drug formulation, pill burden, dosing frequency, and meal requirements, prior to prescribing a new antiretroviral regimen. For children failing their first PI-based regimen, a change to an NNRTI-based regimen is recommended. Conversely, children failing a NRTI-based regimen should switch to a PI-based regimen. At least two and preferably three active agents should be used. In certain situations, use of a triple class regimen may be necessary if a potent dual NRTI backbone cannot be identified. Consultation with a pediatric HIV specialist is recommended in patients with multidrug-resistant virus or those with limited treatment options. In the case of treatment-experienced children aged 16 years failing their ART regimen, use of certain newer medications such as darunavir, maraviroc, or raltegravir may be considered. Enfuvirtide may also be useful for use in heavily treatment-experienced children and adolescents. 3 Drug resistance Viral resistance testing is recommended prior to initiation of ART or modification of a failing treatment regimen. Failure to maximally suppress viral replication may result in viral mutations that lead to drug resistance. However, a lack of drug resistance does not ensure that a medication will successfully reduce viral replication. Resistance assays may be genotypic (GT) or phenotypic (PT). GT assays detect specific viral mutations in patients with HIV RNA levels  1,000 copies/mL. PT assays directly assess whether a viral isolate can grow in the presence of an ARV medication, measuring the 50% or 90% minimal inhibitory concentrations of a drug against the virus in vitro compared with a laboratory strain of wild-type virus. 
A third type of resistance assay (virtual phenotype) predicts the phenotype based on viral genotype. Although a number of resistance assays are commercially available, none is preferred for use in adults or children. However, continued use of the same type of resistance assay over time is recommended for individual patients. The assay should be performed before or within four weeks of drug discontinuation since reversion to wild-type virus may occur within four to six weeks of regimen discontinuation. Careful review of antiretroviral medication history and consultation with a pediatric HIV specialist may be needed for interpretation of viral resistance data. The International AIDS Society-USA (IAS-USA) maintains a list of mutations associated with clinical resistance to HIV which is updated regularly. 57 Medication nonadherence should be suspected if persistent viremia is seen without evidence of viral resistance. Medication adherence Patients with poor medication adherence are at risk for the development of mutations and viral resistance. There are unique medication adherence issues that affect specific age groups. The caregiver/child relationship, HIV disclosure issues, and unpalatable drug formulations may adversely impact proper medication adherence in HIV-infected children, while denial of illness, lack of social support, and mental illness may impede proper ART administration in HIV-positive adolescents. For all treated patients, medication adherence must be evaluated at all medical visits. Any barriers to adherence should be promptly identified and addressed. Some patients may benefit from the use of adherence aids such as medication timers, beepers, or diaries, while others may require the use of intensive pharmacologic or nursing services such as special medication packaging, or directly observed therapy. To ensure the success of an ART regimen, providers should simplify the medical regimen, using the lowest pill burden and formulations that are acceptable to the patient. 58 Discontinuation/Interruption of therapy There are certain situations where temporary discontinuation of ART may be indicated. These include significant drug toxicity, acute gastrointestinal illness, surgery, sedation, or patient/caregiver request. Severe medication toxicity necessitates complete discontinuation of all ARV medications, but children with mild or moderate drug toxicity may not require an immediate change to the regimen, as some symptoms may resolve over time or be managed expectantly. ARV dose reduction is generally not recommended except in the case that therapeutic drug monitoring is available. 3 Structured treatment interruptions (STIs) have been studied in HIV-infected adults to reduce drug toxicity, medication costs, and to provide virologic modification, that is, to return the patient's virus to the wild-type virus state. 31 However, overall immunological results with STIs have been disappointing, and since minimal data exist regarding STIs in children, STI cannot be recommended at this time. Pharmacotherapy of pediatric Hiv Dovepress submit your manuscript | www.dovepress.com Conclusion Currently, HIV infection is often a chronic and manageable infection in adults and children. ARV medications from six classes are used sequentially in combination to suppress viral replication to maximal levels. 
Although the prognosis of HIV infection has improved significantly since the 1980's, viral resistance, drug toxicity, and medication nonadherence still present great challenges to successful treatment. Current practice guidelines for pediatric HIV infection provide updated recommendations to optimize the treatment of HIV infection. 3 The current complexity and rapidly evolving issues in HIV infection make it highly desirable for providers to consult with pediatric HIV specialists when caring for children infected with HIV. Additional pediatric studies are needed to develop new ARV medications, determine optimal ARV doses for HIV-infected children, enhance medication adherence, and to more effectively assess patients for drug toxicities and potential drug interactions.
Robot-assisted surgery and artificial intelligence-based tumour diagnostics: social preferences with a representative cross-sectional survey Background The aim of this study was to assess social preferences for two different advanced digital health technologies and investigate the contextual dependency of the preferences. Methods A cross-sectional online survey was performed among the general population of Hungary aged 40 years and over. Participants were asked to imagine that they needed a total hip replacement surgery and to indicate whether they would prefer a traditional or a robot-assisted (RA) hip surgery. To better understand preferences for the chosen method, the willingness to pay (WTP) method was used. The same assessment was conducted for preferences between a radiologist’s and AI-based image analysis in establishing the radiological diagnosis of a suspected tumour. Respondents’ electronic health literacy was assessed with the eHEALS questionnaire. Descriptive methods were used to assess sample characteristics and differences between subgroups. Associations were investigated with correlation analysis and multiple linear regressions. Results Altogether, 1400 individuals (53.7% female) with a mean age of 58.3 (SD = 11.1) years filled in the survey. RA hip surgery was chosen by 762 (54.4%) respondents, but only 470 (33.6%) chose AI-based medical image evaluation. Those who opted for the digital technology had significantly higher educational levels and electronic health literacy (eHEALS). The majority of respondents were willing to pay to secure their preferred surgical (surgeon 67.2%, robot-assisted: 68.8%) and image assessment (radiologist: 70.9%; AI: 77.4%) methods, reporting similar average amounts in the first (p = 0.677), and a significantly higher average amount for radiologist vs. AI in the second task (p = 0.001). The regression showed a significant association between WTP and income, and in the hip surgery task, it also revealed an association with the type of intervention chosen. Conclusions Individuals with higher education levels seem to accept the advanced digital medical technologies more. However, the greater openness for RA surgery than for AI image assessment highlights that social preferences may depend considerably on the medical situation and the type of advanced digital technology. WTP results suggest rather firm preferences in the great majority of the cases. Determinants of preferences and real-world choices of affected patients should be further investigated in future studies. Supplementary Information The online version contains supplementary material available at 10.1186/s12911-024-02470-x. Background As a result of the technological changes of the past few decades, healthcare has seen an increasing uptake of digital technologies, including robotics, artificial intelligence (AI) and machine learning [1][2][3].Fast and efficient data handling, reduced workload, option of remote control with physical separation and improved accuracy are just some of the key factors that have played a major role in their adoption for various functions, including administrative tasks, data processing, telemedicine and patient education [4,5]. 
One of the main areas of application is diagnostic imaging, where AI-based methods have shown promising results in terms of accuracy, sensitivity and specificity in the segmentation and interpretation of radiological images [6][7][8].The use of AI in the clinical environment, however, is not without its limitations.The most common challenges include the need for time-consuming and resource-intensive user training, high hardware requirements, insufficient integration into the clinical workflow, ethical and legal implications, and the lack of or limited transparency of operation, which can lead to uncertainty about the accuracy and reliability of the results [9,10]. Another area of focus is surgery, where the assistance of robots might offer numerous advantages over conventional methods, including increased accuracy, better implant positioning and improved radiological outcomes, as well as ergonomic benefits and reduced workload for surgeons [11][12][13][14].However, some concerns have also been raised, such as the scarcity of long-term follow-up data and uncertain results on patient outcomes such as functioning, quality of life and perceived pain levels [12,15].Furthermore, several studies failed to prove improvements in complication rates compared with conventional methods [16]. While most clinicians are aware of the existence of new advanced digital health technologies, they have limited real-world experience, evidence-based knowledge and well-established clinical guidelines and, therefore, remain unconvinced about the reliability and accuracy of the results [17].This can be a barrier to the adoption of new technologies in clinical practice, as acceptance and learning of their use by healthcare professionals play a key role in the process [18].However, patient attitudes towards complex digital health technologies may also have a significant impact on their implementation.Patient informed consent is essential and also clinicians are more likely to have positive attitudes and adopt technologies that are better accepted by their patients [19].A number of factors have been shown to improve patient acceptance of advanced digital health technologies, such as use in lower-risk conditions, proven higher accuracy compared to human professionals, or even when the technology is recommended or preferred by the treating physician or healthcare provider [20][21][22][23][24]. Despite their increasing adoption and use, there is a scarcity of studies reporting on patients' perspectives and outcomes, as well as on social attitudes and preferences towards advanced digital health technologies [15,25].Exploring preferences of the society is relevant as, on the one hand, it includes the potential target patients and the social environment (e.g., patients' family members, acquaintances) that might influence their health-related decisions.On the other hand, while we acknowledge that societal acceptance of a new health technology can be driven by a broad range of factors besides evidence on health outcomes, revealing social preferences can give an approximate idea about the expected societal endorsement of their financing decisions. 
Willingness to pay (WTP) is a valuation method that allows for assessing social preferences for a diverse range of products and services, including health technologies [26,27].In healthcare, measuring WTP is based on the assumption that the value and benefits of a given health technology can be determined by examining the ability to make trade-offs between the consumption of goods and factors that may improve health [28].WTP is particularly well suited to measuring the values and benefits of technologies that have a multifaceted nature [29,30].A recent systematic review identified studies that used alternative valuation methods, including WTP, to examine the socio-economic and health benefits of medical devices, non-device health technologies and methodologies [31].In the majority of studies, the WTP valuation method was used.Although, only one study evaluated a digital technology (robotic radiosurgery), and despite the digital developments in orthopedics, rheumatology or radiological image analysis, no study was found that assessed a digital technology in these segments [31]. Therefore, given the limited number of preference elicitation studies, there is a need to evaluate advanced digital health technologies from a broader perspective and to provide information on social preferences for their uses.This information would be of particular interest to clinicians (shared decision-making), researchers and developers (to guide future development directions and design clinical interventions), and health policymakers.It might also be relevant from a public health perspective, as it can drive the attention to subgroups that have reservations towards complex digital health technologies, thus patient education is particularly important in their case.The primary aim of this study was to assess social preferences for the use of different advanced digital health technologies in surgery and diagnostic imaging, as well as to examine the strength of preferences using WTP method.The secondary objective was to investigate how these preferences and WTP are associated with sociodemographic characteristics, health status and electronic health literacy.The results obtained with the two types of advanced technology are compared indirectly. Study description The present study was part of a larger survey on the knowledge about and attitudes towards implantable medical devices in the Hungarian population, details have been reported elsewhere [32].The online cross-sectional study was conducted in July of 2021, involving a sample of the Hungarian general population aged 40 years and over.Quota sampling was applied to ensure the representativeness of the sample for sex, age, education and type of residence.Data collection was carried out by a survey company, participants were recruited from a commercial online panel.Dropout rates and the size of the sampling frame was confidential information, and have not been released by the survey company.The targeted sample size was 1400 respondents.Ethical approval was granted by the Hungarian Medical Research Council (no.IV/5651-1/2021/EKU). Respondents were informed that participation in the survey was voluntary, that their data would remain anonymous and would be used for scientific purposes only.Participants provided written informed consent before the start of the survey. 
The questionnaire The survey consisted of three modules: (1) the epidemiology of and patients' knowledge about implantable medical devices (IMDs) [32]; (2) subjective preferences for robot-assisted (RA) hip replacement surgery and AIbased assessment of a preoperative imaging scan; (3) subjective expectations for having IMDs at older ages.In this paper, results of the second module are presented.Survey questions translated into English are presented in Online Resource 1. The socio-demographic characteristics of the sample, such as respondents' sex, age, educational level, residency, family status and working status, were surveyed.Monthly net household income was recorded in 11 predefined categories, increasing equally with 140 EUR in each category, starting from 0 to 140 EUR and ending at 1260-1400 EUR in the 10th category.It was possible to indicate in an 11th category if the monthly net household income exceeded 1400 EUR. In addition, electronic health literacy and general health state were surveyed with the eHEALS and the EQ-5D-5 L measurement tools, respectively.A detailed description of these outcome measures is provided below.A predefined list of IMDs was used to survey whether the respondent ever had or has an IMD. Measurement tools Electronic Health Literacy Scale (eHEALS) The eHEALS was developed to measure respondents' self-assessed knowledge, confidence, and ability to find, understand and use health-related electronic information resources [33].The tool consists of 8 questions that can be rated on a 5-point scale (1 -'strongly disagree'; 5 -'strongly agree').To calculate the total score, the points for each question are summarized, resulting in a final score of 8 to 40.Higher score indicates higher e-Health literacy.In the present study, the validated Hungarian version of the eHEALS was used [34]. Respondents are asked to indicate what best describes their actual state in each dimension on a 5-level Likert scale (response options: 1 -no problems, 2 -slight problems, 3 -moderate problems, 4 -severe problems, 5 -unable to /extreme problems).Given all the possible combinations, there are 3125 health states that can be distinguished with the EQ-5D-5 L. The EQ-5D-5 L index score can be calculated by attaching preference-based scores, i.e. utility values to these states.In this study, the EQ-5D-5 L value set for Hungary was used [36].The EQ VAS, as the second part of the EQ-5D-5 L, measures respondents' actual self-reported overall health on a visual analogue scale, ranging from 0 to 100, indicating the worst and best health states the respondent can imagine. Stated preferences and willingness to pay for hip replacement Respondents were put into two hypothetical decisionmaking situations.In the first task, they had to imagine that they needed hip replacement surgery due to a gradually developing disease that limited their everyday activities.It was explained that a surgical robot has been developed that is able to perform some phases of the operation completely autonomously.In the case of an adverse event, the doctor could still switch off the robot at any time, and take over the operation.The traditional and the RA procedures were described as equally safe and produce the same results per outcome.Participants were asked to choose which method they would have preferred: a surgery performed primarily by a human surgeon (conventional surgery) or an RA surgery. 
Next, each respondent was assigned to the intervention with the opposite method to the one they had chosen.Respondents were asked how much money they would be willing to pay to have the operation made by the preferred method chosen by the respondent in the previous question.Willingness to pay was recorded in the following 9 categories: 0 EUR (representing no willingness to pay); 0-28 EUR; 28-84 EUR; 84-140 EUR; 140-280 EUR; 280-560 EUR; 560-1120 EUR; 1120-2240 EUR; 2240 < EUR.For the highest category (2240 < EUR), respondents were asked to indicate the amount of money they were willing to pay.(The upper price range was set to be close to the market price at the time of the questionnaire survey.) Stated preferences and willingness to pay for radiological image assessment task In the second task, respondents were presented with a scenario in which a mandatory imaging scan prior to the hip replacement surgery revealed a suspected tumour.The treatment depends on whether the tumour is benign or malignant.They were asked whether they preferred a radiologist to analyse the image and establish the diagnosis or an AI, i.e., a computer algorithm trained to make a diagnosis based on the analysis of thousands of similar cases. Respondents were then informed that the assessment had been carried out with the contrary method (but no information was provided about the result) and were asked how much money they would be willing to pay in order to obtain a secondary expert opinion with their preferred method.The following 9 categories, based on the market price of the interventions at the time of the survey, were used to record willingness to pay: 0 EUR (representing no willingness to pay); 0-3 EUR; 3-14 EUR; 14-42 EUR; 42-70 EUR; 70-98 EUR; 98-280 EUR; 280-839 EUR; 839 < EUR.For the highest category (839 < EUR), respondents were asked to indicate the amount of money they were willing to pay. Respondents' assessment of the difficulty of the tasks After each task, respondents were asked to rate their level of agreement with 'Questions about hip replacement surgery were difficult to answer' and 'It was difficult to answer the questions about the evaluation of the images' , on a 7-level scale (1: totally agree; 4: neither agree nor disagree; 7: totally disagree).Furthermore, participants who reported any level of difficulty also had to indicate the reason why they found it difficult to answer the questions using the following response options: because it was difficult to understand the situation caused by the outlined condition, imagine the need for hip replacement/the suspicion of having a tumour, understand the two medical procedures, choose between the two medical procedures, or indicate the amount they would be willing to pay.If participants found it more suitable they could also provide free-text responses as 'Other' category. Statistical analysis Background factors (socio-demographic characteristics, health status, electronic health literacy), respondents' choices and WTP were analyzed with descriptive statistical methods.Differences by subgroups were tested with Chi-square, Mann-Whitney U, and Kruskal-Wallis tests for categorical variables, and with two sample t-tests for continuous ones. Respondents' income and WTP were recorded in Hungarian forint and converted subsequently to Euro for the analysis.The used exchange rate was 357.49HUF/EUR. 
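For illustration only, the sketch below shows one way the banded WTP responses described above can be converted into a single euro value per respondent, using the midpoint of the selected band (the midpoint assignment itself is described in the statistical analysis that follows) and keeping the exact amount stated in the open-ended top band. The function and variable names are hypothetical and are not part of the study materials.

```python
# Hypothetical helper illustrating the conversion of banded WTP responses
# (hip replacement task) into a continuous euro value for analysis.
# Band boundaries follow the categories described in the text; names are illustrative.

HIP_WTP_BANDS_EUR = [
    (0, 0),        # category 1: not willing to pay anything
    (0, 28),       # category 2
    (28, 84),      # category 3
    (84, 140),     # category 4
    (140, 280),    # category 5
    (280, 560),    # category 6
    (560, 1120),   # category 7
    (1120, 2240),  # category 8
]  # category 9 (> 2240 EUR) is open-ended and handled separately


def wtp_to_eur(category, open_ended_amount_eur=None):
    """Return a continuous WTP value in EUR for a 1-9 category response."""
    if category == 9:
        # Respondents in the top band stated the exact amount they would pay.
        if open_ended_amount_eur is None:
            raise ValueError("The top category requires the stated amount.")
        return float(open_ended_amount_eur)
    low, high = HIP_WTP_BANDS_EUR[category - 1]
    return (low + high) / 2  # midpoint of the selected band


# Example: a respondent choosing the 140-280 EUR band is coded as 210 EUR.
print(wtp_to_eur(5))        # 210.0
print(wtp_to_eur(9, 2800))  # 2800.0
```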
In the survey, we included response options 'Do not know' and 'Do not want to answer' for the incomerelated questions.Such responses were treated as missing values and were excluded from the analysis. Monthly net income per capita was calculated by dividing the middle point of each income category by the number of household members.The method proposed by Parker and Fenwick was used to determine the mean value of the top income category [37].Furthermore, respondents were divided into 5 groups based on their monthly net income per capita, reflecting which national income quintile they belong to.The second, third, fourth and fifth quintiles were calculated from the average of the third to eighth national income deciles given by the Hungarian Central Statistical Office [38]. WTP was converted and treated as a continuous variable by assigning the middle value of the corresponding category to each respondent.In the highest category, the exact values provided by the respondents were used. The correlation of willingness to pay with background variables was investigated by calculating Pearson's correlation.The correlation was considered strong over 0.5, moderate between 0.5 and 0.3, and weak under 0.3 [39].The normality of continuous variables was examined with the Shapiro-Wilk test [40]. Multiple linear regression analysis was carried out to assess which factors are associated with the respondents' WTP.Two separate regression models were developed for the two WTP tasks.In both models, the dependent variable was the amount of money offered by the respondent in the respective WTP task.The independent variables were the preferred method chosen for hip replacement surgery (Model 1) and radiological image assessment (Model 2).In addition, both models were controlled for socio-demographic variables (sex, age, education, health education, residence, employment status, marital status, living with someone in the household), income, eHEALS score, EQ-5D-5 L index, whether the respondent had an implant, and level of difficulty at answering the WTP task questions.All categorical variables were dummy-coded before the analysis.The constant term was excluded from the analysis in both models. The significance level of 0.05 was applied for all statistical tests. Statistical analysis was performed in Stata 17 software (StataCorp LCC., College Station, TX, USA). Sample characteristics Altogether, 1400 respondents completed the survey.Main characteristics of the sample are summarized in Table 1.The average age was 58.3 years (SD = 11.1) and 53.7% were women.Among the 584 participants (41.7%) who ever had at least one implant surgery, 33 reported having had a hip implant, and 32 were still living with that at the time of the survey.The average eHEALS score was 28.1 (SD = 5.8) on the 8-40 scale (men: 27.9, SD = 5.9; women: 28.3, SD = 5.6).The average EQ-5D-5 L index score was 0.83 (SD = 0.26) and EQ VAS was 75.1 (SD = 19.9). Stated preferences for hip replacement surgery and radiological image evaluation In the hip replacement task, 762 (54.4%) respondents preferred the robot-assisted (RA) surgery.Analysis by patient characteristics revealed that respondents' preference was significantly associated with sex as compared to the women-men ratio observed in the total sample (53.7% vs. 
46.3%, respectively); the proportion of women was higher for the conventional (58.8%) and lower for the RA method (49.5%) (Table 1). The two subgroups did not differ significantly by age (means: 57.9 years, SD = 11.4 for the conventional and 58.7 years, SD = 10.8 for the RA surgery, p = 0.21). The average net income per capita was 378.2 (SD = 9.3) EUR and 446.3 (SD = 10.0) EUR in the subgroups choosing the conventional and the RA hip replacement surgery, respectively. The two subgroups differed significantly by income quintile group (individuals with higher income were more likely to choose the RA surgery). No significant difference was found between subgroups based on whether or not they had any IMD in their history.

In the image evaluation task, 470 (33.6%) respondents chose the AI-based image assessment. Respondents who preferred to have their radiological image analysed by AI were significantly older than those who chose the radiologist (means: 59.8 years, SD = 10.9 vs. 57.6 years, SD = 11.1, respectively; p < 0.05). The average net income per capita was 403.7 (SD = 8.6) EUR and 438.1 (SD = 11.8) EUR among those who chose the radiologist or the AI to make the diagnosis, respectively. However, no significant difference was found by income quintile group. Respondents who had an IMD were more likely to choose the AI-based image assessment.

In both tasks, respondents who opted for the digital health technology had significantly higher levels of education compared to those who opted for the conventional method. However, there was no difference according to whether respondents had any degree in health education. Those who chose the conventional method had significantly lower eHEALS scores compared to those who chose the digital technology in both the hip replacement surgery and the radiological image assessment task, although the observed differences were small as measured with Cohen's D (D = −0.159, 95% CI −0.264 to −0.053 and D = −0.145, 95% CI −0.256 to −0.034, respectively). No significant differences were observed in respondents' health status as measured by the EQ-5D-5 L index score and EQ VAS in the two groups.

A considerable proportion of respondents in the total sample chose the physician (surgeon or radiologist; N = 504, 36.0%) for both tasks; their mean age was 57.4 (SD = 11.4) years and 58.1% of them were women. Fewer respondents chose the advanced digital technology (N = 336, 24.0%) in both cases; they were slightly older, with a mean age of 59.6 (SD = 10.9) years, and there were fewer women (47.3%) among them.

Willingness to pay for hip replacement surgery
In the hip replacement surgery task, about one-third of participants were not willing to pay to have the intervention performed with their preferred method. The maximum amount offered was 5315 EUR for the conventional and 2797 EUR for the RA surgery (Table 2). The average amount of money that respondents were willing to pay did not differ significantly between those who chose the conventional or the RA surgery. In both subgroups, significant differences were observed by educational level and income group (Table 3; Online resource 2). More than a third of respondents totally disagreed that questions about hip replacement were difficult to understand (31.4% in the conventional and 38.7% in the RA surgery subgroups), and a similar proportion was neutral (neither agreed nor disagreed) in the conventional surgery subgroup (33.4%), but just over a fifth in the RA surgery subgroup (22.7%) (Online resource 3).
For those who reported difficulties with at least one of the pre-defined response options in the surgeon (N = 438) and the RA surgery (N = 467) subgroups, the most commonly indicated problems were deciding on the amount of money offered (41.8% and 49.0%, respectively), choosing between the two methods (41.8% and 33.2%, respectively), and imagining the need for the intervention (37.9% and 40.0%, respectively). Figure 1 shows the frequency of reasons why respondents found questions difficult to answer in the two WTP exercises.

Willingness to pay for radiological image assessment
Nearly one-third of participants who originally chose to have their image assessed by a radiologist were unwilling to pay any money for a secondary expert opinion from an AI. Of those who chose AI, more than 22% were not willing to pay to have their image analysed with their preferred method. In this task, no one offered more than 559 EUR for any of the options (Table 2). Willingness to pay was significantly lower for respondents choosing the radiologist compared to those who chose the AI to assess their image and make the diagnosis; however, the effect size was small (Cohen's D = −0.181, 95% CI −0.292 to −0.070). In both subgroups, significant differences were also observed along educational level and income groups (Table 3; Online resource 2). At the question of whether it was difficult to give an answer in the radiological image assessment task, the level of disagreement was 34.0% and 42.8% in the subgroups choosing the radiologist or the AI to make the diagnosis, respectively. The proportion of those who neither agreed nor disagreed was 30.1% and 23.2%, respectively (Online resource 3). Any difficulties with the task were reported by N = 614 and N = 269 respondents in the radiologist and AI subgroups, respectively. The most commonly reported difficulties were deciding on the amount to pay (45.9% and 46.5% in the two groups) and choosing between the two available methods (47.6% and 37.2% in the two groups) (Fig. 1).

Subsample indicating zero WTP in both tasks
In total, 341 (24.4%) respondents did not want to pay money in either task. Their average age was 58.1 (SD = 9.9) years, 49.9% of them were female, and they had a significantly (p = 0.004) lower average net income per capita compared to those who had a WTP greater than zero (379.7 EUR, SD = 235.7 vs. 426.6 EUR, SD = 239.6). Among them, more than 40% totally disagreed that it was difficult to answer the questions regarding the tasks, which was numerically higher than among those who expressed a positive WTP (42.5% vs. 33.1% for the hip replacement and 44.0% vs. 34.7% for the radiological image analysis task). Those who had zero WTP were also less likely to report any problems with understanding the tasks than those who were willing to pay (57.5% vs. 66.9% for the hip replacement and 56.0% vs. 65.3% for the radiological image analysis task, respectively).
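As a purely illustrative sketch of the kind of subgroup comparison reported in this subsection (for example, contrasting net income per capita between respondents with zero and positive WTP), the snippet below runs a two-sample t-test and computes a Cohen's D effect size on synthetic data; the sample sizes, means and standard deviations merely echo the figures reported above and do not reproduce the study dataset.

```python
# Illustrative two-sample comparison on synthetic data (not the study dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical net income per capita (EUR) for zero-WTP vs. positive-WTP respondents
income_zero_wtp = rng.normal(loc=380, scale=236, size=341)
income_positive_wtp = rng.normal(loc=427, scale=240, size=1059)

# Two-sample t-test (Welch's variant, which does not assume equal variances)
t_stat, p_value = stats.ttest_ind(income_zero_wtp, income_positive_wtp, equal_var=False)

# Cohen's D using the pooled standard deviation
n1, n2 = len(income_zero_wtp), len(income_positive_wtp)
pooled_sd = np.sqrt(((n1 - 1) * income_zero_wtp.std(ddof=1) ** 2 +
                     (n2 - 1) * income_positive_wtp.std(ddof=1) ** 2) / (n1 + n2 - 2))
cohens_d = (income_zero_wtp.mean() - income_positive_wtp.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's D = {cohens_d:.3f}")
```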
Correlations between willingness to pay and background variables The results of the Shapiro-Wilk test indicated that the continuous variables included in the analysis followed a non-normal distribution.In the total sample, the correlation of WTP with age and income was significant but weak in both the hip replacement surgery (r = 0.107, p < 0.001 and r = 0.162, p < 0.001, respectively) and radiological image assessment tasks (r = 0.093, p < 0.001 and r = 0.179, p < 0.001 respectively).With the eHEALS, the observed correlations were also weak and significant in case of radiological image assessment (r = 0.071, p = 0.008), but not for hip replacement surgery (r = 0.019, p = 0.481).There was no significant correlation between WTP and the EQ-5D-5 L index score in any of the tasks.With the EQ VAS, a significant but also weak correlation was found for the radiological image assessment (r = 0.088, p = 0.001), but not for the hip replacement surgery task (r = 0.046, p = 0.084). Regression analysis The results of the regression analysis can be seen in Table 4.When controlling for respondent characteristics, WTP was lower if the preferred choice of intervention was robotic surgery than if the respondent preferred to be operated by a surgeon (Model 1).However, this relationship was not significant for radiological image assessment (Model 2).In both models, WTP was significantly higher if the respondent had a higher income, and this relationship was stronger for hip replacement surgery. Other factors significantly associated with WTP were age (Model 1), and eHEALS score (Model 2).Sociodemographic characteristics not associated with WTP in any of the tasks were sex, education, residency, employment, whether the respondent was in a relationship and whether the respondent lived with someone in the household.Similarly, the respondent's health status as measured with the EQ-5D-5 L and whether the respondent had any implant were not significantly related to WTP.The level of reported difficulty in either task was also not associated. Discussion The aim of this study was to assess social preferences for advanced digital health technologies using the hypothetical examples of robot-assisted (RA) hip replacement surgery and AI-based radiological image analysis for tumour assessment.Slightly more than half of the respondents (54.4%) opted for RA hip replacement surgery over conventional surgery, whilst a smaller proportion (33.6%) chose AI over radiologist for the image assessment.Significant difference was found between the two subgroups in both hypothetical situations along educational level (i.e., more educated respondents chose the digital technology) but income level was different only in the hip replacement task.In the WTP part, only about one-third would not have been willing to pay to be treated by their preferred method if they had been assigned to the opposite method for hip replacement surgery (surgeon: 32.8%, RA: 31.2%), and somewhat fewer respondents indicated zero WTP in the image analysis task (radiologist: 29.2%; AI: 22.5%).The regression analysis revealed significant association between WTP and respondents' income level in both hypothetical situations.In addition, in the hip replacement task, WTP was higher if the respondent preferred the conventional surgery and increased with age. 
Discussion
The aim of this study was to assess social preferences for advanced digital health technologies using the hypothetical examples of robot-assisted (RA) hip replacement surgery and AI-based radiological image analysis for tumour assessment. Slightly more than half of the respondents (54.4%) opted for RA hip replacement surgery over conventional surgery, whilst a smaller proportion (33.6%) chose the AI over the radiologist for the image assessment. A significant difference was found between the two subgroups in both hypothetical situations along educational level (i.e., more educated respondents chose the digital technology), but income level differed only in the hip replacement task. In the WTP part, only about one-third would not have been willing to pay to be treated by their preferred method if they had been assigned to the opposite method for hip replacement surgery (surgeon: 32.8%, RA: 31.2%), and somewhat fewer respondents indicated zero WTP in the image analysis task (radiologist: 29.2%; AI: 22.5%). The regression analysis revealed a significant association between WTP and respondents' income level in both hypothetical situations. In addition, in the hip replacement task, WTP was higher if the respondent preferred the conventional surgery and increased with age. In the image evaluation task, a positive association was observed between WTP and eHEALS.

Compared to previous studies, our results showed a higher level of acceptance of RA surgery for hip replacement. In 2023, Abdelaal and colleagues (USA) found in a questionnaire survey among potential candidate patients for total knee arthroplasty that patients would primarily prefer the conventional surgery to the RA procedure, with just over 40% opting for the advanced technology [20]. The design and sampling of the study by Muaddi and colleagues (Canada, USA) was more similar to ours; nonetheless, just over a third of their participants chose the RA procedure over laparoscopic surgery [42]. In contrast, we observed that more participants opted for the RA procedure in the hip replacement surgery task. It would be worth examining how technological, economic, cultural and healthcare system differences between countries influence the acceptability of and preferences for RA surgeries.

The preference for complex digital technology seems to depend on the type of medical procedure. In the radiological image evaluation task, the great majority, around two-thirds of the respondents, preferred a radiologist rather than an AI. This observation is consistent with the findings of Juravle and colleagues, who reported in 2020 that participants have less confidence in a diagnosis made by an AI than in one made by a human physician [43]. Another possible explanation for the differences in acceptance could be that those who preferred the radiologist assessment over the AI reported the most difficulties with understanding the description of the procedures. This may indicate that these respondents were likely to have less knowledge of the technology in general. However, it has been previously described that awareness does not correlate with the adoption of clinical AI [24]. We also found that those who chose the AI-based procedure were older on average than those who chose the radiologist. It was not the aim of our study to analyse factors influencing choice preferences among the elderly; however, it has been previously described in the literature that older people's preferences for advanced technologies are shaped by a combination of factors such as technology concerns, expected benefits, available alternatives and social influences [44].

Fig. 1 Frequency of reasons why respondents found questions difficult to answer in the two WTP tasks. * Percentages refer to the proportion of a given reason among respondents who reported at least one difficulty. Percentages do not add up to 100% as respondents could indicate more than one difficulty.
Educational level seems to play a key role in openness to digital health interventions, as those who chose the digital technology in both tasks had higher levels of education and e-health literacy. Other sociodemographic factors had variable importance in the two hypothetical situations. For instance, those who chose RA hip replacement had a higher household income, while the choice in the image evaluation task was not associated with income. In terms of current health status, no difference was observed. Previous health experiences might have had an impact on preferences; however, this could only be tested to a limited extent in our study. Since the study involved respondents from the general population, only a very small portion of respondents were expected to have direct experience with the medical situations described in the tasks. Therefore, setting a cut-off point of 40 years for inclusion in the study was a deliberate step, partly in order to increase the proportion of respondents with personal experience of implantable medical devices (and to apply the hypothetical situation of needing a hip replacement to those who are closer to the typical age of this intervention). However, only 33 respondents in the sample had undergone hip replacement surgery. On the other hand, RA surgery is not yet widely used in clinical practice; therefore, the population generally lacks its own experience with it. AI plays an increasing role in image analysis; however, the radiological diagnosis is established and signed off by a radiologist. Studies focused on specific patient groups could provide a better understanding of the impact of past experiences on preferences.

In both WTP tasks, approximately one-third of the respondents were not willing to pay any money to secure their preferred procedure over the other option. This share is quite low compared to the results of Abdelaal and colleagues, who found in their survey that fewer than one-tenth of participants were willing to pay for RA knee replacement surgery in a WTP task [20]. Respondents who were not willing to pay might be less likely to stick with their choices (for various reasons) and, therefore, have lower WTP for advanced digital technologies. Affordability is a common bias factor in WTP [45]; however, we find it important to highlight that there were options to pay very small amounts, which should decrease the influence of financial status on stated preferences. Another important aspect in this context is that previous studies have shown that preferences for health technologies are fundamentally influenced by their reimbursement scheme [46,47]. However, in our study, we were not able to assess how WTP is related to reimbursement, as in the two hypothetical decision tasks in the survey, respondents were not provided with information about the financing scheme through which they would access the technology. Nevertheless, we believe that this could be the subject of future research, as knowledge of how these factors are associated with social preferences could help both financing bodies and health policymakers in decision-making about advanced digital health technologies.
According to the descriptive analysis, WTP amounts were very similar in the conventional and RA hip replacement surgery subgroups. However, although fewer participants chose the AI option in the second task, a higher WTP was observed for the AI-made secondary opinion compared to the radiologist. In the subgroup analysis, we observed that respondents with higher education and income had higher WTP for their preferred choices in all cases. We also observed a weak positive correlation between income and WTP. These results are consistent with those already reported in the literature regarding the WTP method. Education has been positively correlated with WTP, and income is also known to influence WTP, as people with higher incomes can afford to spend more [45].

WTP may be confounded by a number of factors that do not show any correlation in a univariate analysis [48]. Therefore, a multiple regression analysis was conducted to further analyse and understand the relationship between participants' WTP, their preference for the available methods, and background characteristics, including income. In contrast to the descriptive analysis, in the regression analysis we found significantly higher WTP for those who opted for the conventional surgery. However, in the radiological image evaluation task, WTP was not significantly associated with the preferred method. In terms of patient characteristics, many studies have reported that participants' gender, age, marital status, education, place of residence, employment and income are significant determinants of WTP for health services [48]. In contrast, we identified income alone as a factor associated with WTP in both tasks. Higher age was associated with higher WTP in the hip replacement surgery task, while the association between electronic health literacy and WTP was only significant for the radiological image assessment task. At the same time, no association was found for other patient characteristics, including those described in the literature as commonly correlated with WTP [48].
Limitations of our study need to be considered when interpreting the results. The hypothetical healthcare situations could be perceived differently by participants in terms of disease severity, possible outcomes and risks of the interventions, and general knowledge regarding RA and AI medical technologies. The difference may also be explained by the fact that the radiological image assessment process is conducted in the background, hidden from the patient. Surgical intervention, on the other hand, is a more concrete activity of which the patient might have direct experience, either from his or her own medical history or from that of relatives and acquaintances. Assessment of these factors would be valuable to better understand the results and the reasons why fewer participants opted for the AI-made radiological image analysis and diagnosis compared to the RA hip surgery. Another limitation is that WTP is likely to be influenced by the design itself; thus many factors need to be considered, such as the severity of the situation described. Since we used the stated preference method to explore preferences, real-world choices might be different. A further limitation is that the sampling frame of the survey was treated as confidential information, and therefore the authors had no information on the response rate. However, we believe that this does not affect the reliability of the results, due to the large sample size and the quota sampling used to ensure representativeness in terms of sex, age, education and type of residence. We used a cut-off point of 40 years as an inclusion criterion; hence our study does not provide evidence on the preferences of younger adults. A limitation related to the analysis was that the continuous variables included were not normally distributed. However, to the best of our knowledge, the violation of this assumption has only little or no effect on the validity of the results, especially given the large sample size of the study [49][50][51][52]. In addition, we also acknowledge that, despite their significance, the correlation coefficients observed in the study were small, indicating a weak association of WTP with age, income, eHEALS and the EQ VAS. In addition, the R-squared values in the regression analysis were also low. We believe this is due to the complex and multifactorial nature of WTP, which is influenced by a number of factors. In line with this, previous studies published in the literature have also found weak associations and low R-squared values in linear regression analyses similar to ours [48].
Nonetheless, our research has a number of strengths as well. It was conducted on a large sample, representative of the general adult population of Hungary, thereby filling an important knowledge gap regarding social preferences on RA and AI-based medical technologies. We tested two different advanced digital technologies in parallel, which allowed a better understanding of the contextual dependency of preferences. Our study offers insight into the relationship between individuals' self-reported electronic health literacy and their views on RA and AI medical technologies, which has been an underexplored area so far. For future research, we suggest investigating the generalisability of our findings with respect to other RA or AI-driven medical technologies, validating our results with other valuation methods, exploring the determinants of preferences further (e.g., the impact of the reimbursement context, free choice of physician, age-dependency of choices and WTP) and comparing hypothetical and real-world choices in specific patient groups. We encourage pre-post studies to assess the effects of eHealth-targeted public health campaigns on preferences for AI-based technologies, and a re-evaluation of our findings in new studies when the penetration of AI technologies in healthcare has become higher.

Conclusions
The results of this study suggest that there is considerable societal openness towards advanced robot-assisted and AI-based health technologies. A large number of participants, who tended to be older and to have higher incomes, expressed a desire to use them. However, no clear differences were observed in the strength of preferences as measured by the WTP method. Our results suggest a correlation between WTP and both education and income levels, although the multiple regression analysis revealed a clear relationship between WTP and income only. The regression also showed that respondents who opted for the conventional surgery had higher WTP, but no such difference was observed for the radiological image assessment. Our findings are of considerable value to clinicians involved in the provision of care, who can gain insights into societal attitudes towards and acceptance of emerging advanced technologies, helping them to make therapeutic decisions and design clinical interventions. Technology developers may also benefit from the observations of this research, as knowledge of social preferences is essential in determining the direction of technology development. Prospective studies are encouraged in the future to better understand how individual factors influence WTP and to investigate their causal relationships.

Abbreviations: WTP, willingness to pay; AI, artificial intelligence; RA, robot-assisted; eHEALS, electronic health literacy scale.

Table 1 Characteristics of the sample. Differences in the values of binary, ordinal and continuous variables were compared with Chi-square, Mann-Whitney U and two-sample t-tests, respectively; 'Do not know' and 'Do not want to answer' responses were treated as missing values and excluded from the analysis.
Table 2 Distribution of participants across willingness to pay categories. Conversion: 1 EUR = 357.49 HUF. *There was a missing value for one respondent. **The 0 category represents no willingness to pay.
Table 3 Willingness to pay by socio-demographic subgroups. Differences in WTP were tested with Kruskal-Wallis tests. a 'Do not know' and 'Do not want to answer' responses were treated as missing values and excluded from the analysis.
Table 4 Results of the regression analysis. The constant term was excluded from the analysis.
The Model of Odonate Diversity Relationship with Environmental Factors Based on Path Analysis
This study aims to analyze and describe the relationship between altitude, aerial variables (temperature, light intensity, humidity), water qualities (water temperature, pH, BOD, COD, DO, TOM, and water velocity), and vegetation with the diversity of Odonate assemblages. Odonate samplings were conducted at six survey sites based on altitude and vegetation characteristics. Measurements of altitude, aerial variables, water qualities and vegetation characteristics were replicated on the first day and the third day. Analysis of the correlations of all environmental factors with the odonate diversity was done through a structural equation model using Partial Least Squares (PLS), open-source Smart software and Microsoft Excel. The aerial variables and water qualities affected odonate diversity indirectly. The aerial variables, directly or in interaction with other factors, affected the water qualities and vegetation characteristics. The vegetation characteristics directly influenced odonate diversity. Water flow affected water quality, light intensity affected the aerial variables, while the morning observation period affected the odonate diversity. The predictive relevance (Q2) for the model designed amounted to 99.95%, while the remaining 0.05% is explained by other variables.

Introduction
Indonesia is a tropical country with diverse natural resources and home to a huge number of insect species (Darnaedi and Noerdjito, 2007). One of the insect groups in this country that has gained attention from entomologists is Odonata (Orr, 2004). This group plays important roles in creating a balance in the food chain of agricultural areas. In addition, their sensitivity to environmental fluctuation makes odonate species excellent biological indicators of environmental conditions. Several studies have reported that dragonflies are often successfully used as indicators of water quality (Clark and Samways, 1996; Samways et al., 1996). Based on published reports, more than 5680 odonate species have been identified in the world (Kalkman et al., 2008). However, recent data on the number of species in Indonesia are not available.

Odonata is an insect group that varies morphologically in colour, body size, and wing shape. There are two groups of Odonata, dragonflies and damselflies; both are very common in the surrounding environment and use a wide range of landscapes. Many species have a narrow distribution range and are habitat specialists, including dragonflies that inhabit alpine mountains, watersheds and river waterfalls in the tropical rain forest. The highest diversity is found in flowing waters in the tropical rain forest, with the Oriental and Neotropical regions being the most speciose.

Dragonflies are predators in both the nymph and imago phases. The adults are active predators and natural enemies in agricultural and plantation habitats. Odonate adults and nymphs are also preyed upon by a variety of organisms, among others birds, bats, reptiles and fishes. The young nymph stages feed on protozoans, mosquito larvae, small crustaceans (e.g., Daphnia sp., Cyclops sp.) and other small animals. Older larvae feed on tadpoles, small fishes, water beetles, and other odonate nymphs of different species as well as of the same species (cannibalism).
In scientific publications and research journals, information reviewing the diversity and role of dragonflies, especially in Indonesia, is very limited. Studies on dragonfly diversity in Indonesia, whether for basic knowledge or other utilization, still need to be developed. Dragonflies are part of Indonesia's insect wealth but rarely receive public attention, either regarding their roles or their utilization. The species present in Indonesia have not been fully identified.

In their life cycle, adults lay eggs in relatively unpolluted water, and the eggs later hatch into nymphs that live in the water. Odonata is a group of insects that undergo incomplete metamorphosis. According to Theischinger & Hawking (2006), the life cycle of odonate species comprises three phases of development: egg, nymph and imago. The egg and nymph stages of development occur in the water, while the imago lives in the aerial environment. Environmental degradation as a result of land use changes, pollution, and the use of pesticides has put pressure on Odonata populations. If such conditions persist in the future, they will lead to the breakdown of the food chain in an ecosystem, especially agricultural ecosystems, which in turn will be followed by explosions of pests and the extinction of several species. Such conditions can reduce the existing biodiversity, including Odonata.

Cordoba (2008) stated that during the period 1994-2007, Odonata was one of seven insect orders that gained remarkable attention in scientific publications. Nevertheless, its publication frequency still ranked only sixth, after the other orders, as follows: Hymenoptera, Lepidoptera, Coleoptera, Diptera, and Orthoptera. However, studies evaluating the relationship of environmental factors with dragonflies are few. Studies of the relation of environmental factors with dragonflies generally use a multivariate approach to identify the factors that most influence the distribution of dragonflies. The analysis technique used is usually correlation analysis (Hornung and Rice, 2003) or correspondence analysis (Samways and Steytler, 1996). In this study, the relationship of environmental factors with the odonate assemblage is described using path analysis, so that the relationships between the parameters are explained in more detail. This research focuses on environmental factors; the composition and diversity of the dragonflies themselves are not discussed because they have been published in other studies.

Materials and methods
The research was conducted from October 2011 to January 2012. The research was located in the Brantas River Watershed (BRW) and the surrounding area in Malang. The observation points chosen for the study in the Brantas River Basin are in the upstream area and the middle area of Batu in Malang. Sampling was conducted at six sites, as follows:
1. Wendit Residential Area (WR). This site is geographically located at 7°57'S and 112°40'E, 435 m in altitude. The river widths at this location range from 10 to 30 meters, with a muddy riverbed. The existing vegetation around the river flow is dominated by kale plants.
2. Wendit paddy fields (WP). This site is located about 500 meters north of WR. The river widths at this location range from 10 to 20 meters, with a muddy riverbed. The existing vegetation around the river flow is kale and rice paddy fields.
3. City Center Park (CC). This site is geographically located at 7°58'S and 112°38'E, 441 m in altitude, and represents a location close to human activities such as a residential area, a traffic area and a local tourist destination. The river widths at this location range from 10 to 20 meters, with a rocky and muddy riverbed. The existing vegetation around the river flow is grass and shrubs.
4. Sengkaling (SK). This site is geographically located at 7°54'S and 112°35'E, 584 m in altitude. The river widths at this location range from 10 to 20 meters, with a rocky and muddy riverbed. The existing vegetation around the river flow is grass and shrubs.
5. Talun Coban (CT). This site is geographically located at 7°45'S and 112°30'E, 1295 m in altitude. The river widths at this location range from 5 to 30 meters, with a rocky and sandy riverbed. The existing vegetation around the river flow is shrubs and tree stands.
6. Sumber Brantas (SB). This site is geographically located at 7°45'S and 112°32'E, 1970 m in altitude. The river widths at this location range from 1 to 2 meters, with a rocky riverbed. The existing vegetation around the river flow is grass and shrubs.
Environmental factors were measured at each observation point, at each observation time, around the place where dragonflies were found within the predetermined observation plots. The environmental factors measured at the study sites included altitude, aerial abiotic variables (temperature, light intensity, humidity), water qualities (water temperature, pH, BOD, COD, DO, TOM, and water velocity), and vegetation. All measurements were replicated on the first and third days. Analysis of the correlations of all environmental factors with the odonate diversity was done through a structural equation model using Partial Least Squares (PLS), open-source Smart software and Microsoft Excel.
Result
The results showed that the air temperature, water temperature, light intensity and air humidity were highest in WP, while the lowest temperature was observed in SB. The pH level of the water was highest in SB, while the lowest was in SK. The BOD and COD levels were highest in SK, while the lowest were in SB. The highest DO was found in SK, while the lowest was at WR. The TOM had its highest level at CC, while the lowest was at SB. The highest flow velocity was found in WP, while the lowest was in SK (Table 1).

The indicators for the water quality variable were as follows: water temperature (k1), pH (k2), BOD (k3), COD (k4), DO (k5), TOM (k6), and flow velocity (k7). The geography variable has altitude as its indicator (g). The indicators for the aerial variables were temperature (i1), light intensity (i2), and humidity (i3). The indicators for the vegetation variable were the number of species and the diversity index. The indicators for the diversity variable were the number of odonate species in the morning (d1), the number of odonate species in the afternoon (d2), the odonate diversity in the morning (d3) and the odonate diversity in the afternoon (Figure 1).

Statistical analysis to design a model of the relationship between environmental factors, vegetation and odonate diversity (Figure 1) indicated that the water quality indicators [temperature (k1), pH (k2), BOD (k3), COD (k4), DO (k5), TOM (k6)], the aerial temperature indicator, the vegetation indicator number of species (v1), and the odonate diversity indicators [number of odonate species in the morning period (d1), number of odonate species in the afternoon period (d2), and dragonfly diversity index in the afternoon period] did not interact significantly (p > 0.05). The model was therefore improved by excluding the indicators that had no significant effect. The intermediate model of the relationships explained the interaction between altitude (g), sunlight intensity (i2), humidity (i3), vegetation diversity, water flow (k7), and the odonate diversity in the morning period (d3).

Based on the Goodness of Fit test results on the structural (inner) model, using the predictive relevance (Q2) and the coefficient of determination (R2) of each endogenous variable, the following can be explained:
1. The geography variable (altitude) interacted with the aerial variables with an R2 value of 0.881, meaning that the aerial variables were affected by geography by 88.1%.
2. The altitude and aerial variables interacted with vegetation at 0.174, meaning that the vegetation was influenced by the geography and aerial variables by 17.4%.
3. The altitude, aerial variables and vegetation interacted with the water quality at 0.815, meaning that the water quality was affected by the altitude, aerial variables and vegetation by 81.5%.
4. The altitude, aerial variables, water quality and vegetation interacted with the diversity of odonate species at 0.974, meaning that the odonate diversity was influenced by geography (altitude), aerial variables, water quality and vegetation by 97.4%.
From the calculations, the predictive relevance (Q2) was 99.95%, so the final model was fit (greater than 80%). The remaining 0.05% is explained by other variables (Figure 2).
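As a quick arithmetic check, the reported predictive relevance can be recomputed from the four R2 values above. The short sketch below assumes the usual PLS formula Q2 = 1 − ∏(1 − R2_i) over the endogenous variables; it is an illustration added for the reader, not the authors' code.

```python
# Recompute the predictive relevance (Q2) from the reported R2 values,
# assuming the standard PLS formula Q2 = 1 - prod(1 - R2_i).
r2 = {
    "aerial variables": 0.881,   # explained by geography (altitude)
    "vegetation": 0.174,         # explained by altitude and aerial variables
    "water quality": 0.815,      # explained by altitude, aerial variables and vegetation
    "odonate diversity": 0.974,  # explained by all of the above
}

remaining = 1.0
for value in r2.values():
    remaining *= (1.0 - value)   # unexplained variance accumulated multiplicatively
q2 = 1.0 - remaining

print(f"Q2 = {q2:.4f} ({q2 * 100:.2f}%)")  # prints Q2 = 0.9995 (99.95%), matching the text
```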
Based on the final construction of the model and the t-test results of the inner model, the intermediate model was converted into a final model of the relationship of environmental factors, vegetation and odonate diversity, with indicators that affect it directly and indirectly (Figure 3). Based on this final model, several correlations can be explained as follows: (i) the significant indicator of water quality was flow velocity, (ii) the significant indicator of the geography variable was altitude, (iii) the significant indicators of the aerial variables were light intensity and humidity, and (iv) the significant indicator of the odonate diversity was the odonate diversity in the morning period. The geography variable (altitude) had a direct effect on the water quality, the aerial variables, vegetation and odonate diversity. The altitude interacted with the water quality, the aerial variables and vegetation in affecting the odonate diversity. The aerial variables and water quality did not affect odonate diversity directly, but they affected it indirectly through their interaction with vegetation. The vegetation affected odonate diversity directly.

Discussion
The analysis showed that the structural model of dragonfly diversity clarifies the interactions between several variables, each of which consists of several indicators. The water quality variables, geography, aerial variables, and vegetation are supposed to influence the diversity of dragonflies. The similarity of the structure and composition of odonate species found in the study sites is due to their adaptation to the environment, and these interactions might affect the diversity of odonate species. Based on the results of the statistical testing, the environmental variables observed in the field that, with their interactions, determined the diversity of odonate species were the altitude, the aerial variables, water quality, and the vegetation around the observation sites. Krebs (2001) states that the density and abundance of populations in an ecosystem are influenced by very complex factors, including competition among species, food availability and physiological environmental stress. According to Hellawell, the state of the habitat and aquatic environment may influence the presence of odonate species from time to time. Furthermore, Silsby (2001) stated that the presence of Odonata in a region is largely determined by the quality of the water environment in these places. During the development of its life, Odonata lives in two different places: during the egg and nymph phases it lives in aquatic environments, while the adult stage lives on land.

This research showed that the water quality parameters have an indirect effect on odonate species through their interaction with vegetation. Habitat parameters that affect the growth of eggs and nymphs of dragonflies are water temperature, dissolved oxygen, pH, flow velocity, conductivity, type of substrate and the vegetation around the habitat (Corbet, 1999), as well as the availability of food resources (Basset, 1995). Yaherwandi (2005) stated that the richness of plant diversity forms a better community structure, so that the habitat of a region is capable of providing a variety of resources, such as alternative hosts and sources of food, for the survival and diversity of certain insects.

Other studies have shown a strong connection between the diversity of plants in marsh areas and the populations of odonate species (Hornung and Rice, 2003), because the health or quality of the swamp greatly affects the diversity of odonate species. These results are consistent with previous studies which stated that the reproductive success of odonate species is influenced by the structure, diversity and richness of vegetation, which can be associated with the requirements for oviposition on specific substrates (Lenz, 1991); the vegetation guides the odonate species in selecting sites for the reproduction process (Buskirk & Sherman, 1985; MacKinnon & May, 1994).

The concentration of dissolved oxygen in the water (DO) and temperature directly affect the abundance of odonate larvae. The level of dissolved oxygen in the water affects the behaviour, metabolism and survival of Odonata larvae (Corbet, 1999; Hofmann and Mason, 2005). Variation in oxygen availability between lacustrine (low oxygenation) and lotic (high oxygenation) environments determines the species diversity of Odonata (Simmons and Voshell, 1978; Corbet, 1999). Water temperature also has an effect on the abundance and development of odonate larvae (Corbet, 1999). Places with high temperatures, such as hot springs, tend to show decreased abundance and diversity of odonate species (Corbet, 1999). The importance of abiotic factors such as water temperature and dissolved oxygen levels for the presence of larvae of several species such as Enallagma sp., Homeura sp. and Telebasis sp. is also known, as these species are very sensitive to variations in dissolved oxygen concentration and water temperature throughout the year (Hornung and Rice, 2003).

Conclusion
The aerial variables, directly or in interaction with other factors, affected the water qualities and vegetation characteristics. The vegetation characteristics directly influenced the odonate diversity. Water flow affected water quality, light intensity affected the aerial variables, while the morning observation period affected the odonate diversity.

Figure 1. Hypothetical structural model of environmental factors, vegetation and odonate diversity relationships in the Sub DAS Brantas area.
Figure 2. Intermediate structural model of environmental factors, vegetation and odonate diversity relationships in the Sub DAS Brantas area.
On lengths on semisimple groups
We prove that every length on a simple group over a locally compact field is either bounded or proper.

Introduction
Let G be a locally compact group. We call here a semigroup length on G a function L : G → R+ = [0, ∞[ such that
• (Subadditivity) L(xy) ≤ L(x) + L(y) for all x, y ∈ G;
• (Local boundedness) L is bounded on compact subsets of G.
We call it a length if moreover it satisfies
• (Symmetricalness) L(x) = L(x−1) for all x ∈ G.
We do not require L(1) = 0. Note also that local boundedness weakens the more usual assumption of continuity, but also includes important examples like the word length with respect to a compact generating subset. See Section 2 for further discussion. Besides, a length is called proper if L−1([0, n]) has compact closure for all n < ∞.

Definition 1.1. A locally compact group G has Property PL (respectively strong Property PL) if every length (resp. semigroup length) on G is either bounded or proper.

We say that an action of a locally compact group G on a metric space is locally bounded if Kx is bounded for every compact subset K of G and x ∈ X. This relaxes the assumption of being continuous. The action is bounded if the orbits are bounded. If G is locally compact, the action is called metrically proper if for every bounded subset B of X, the set {g ∈ G | B ∩ gB ≠ ∅} has compact closure.

Proposition 1.2. For a locally compact group G, the following are equivalent:
(i) G has Property PL;
(ii) Any action of G on a metric space, by isometries, is either bounded or metrically proper;
(ii') Any action of G on a metric space, by uniformly Lipschitz transformations, is either bounded or metrically proper;
(iii) Any action of G on a Banach space, by affine isometries, is either bounded or metrically proper.

When G is compactly generated, Property PL can also be characterized in terms of its Cayley graphs.

Proposition 1.3. Let G be a locally compact group. If G has strong Property PL (resp. Property PL), then for any subset S (resp. symmetric subset) generating G as a semigroup, either S is bounded or we have G = Sn for some n. If moreover G is compactly generated, then the converse also holds.

I do not know if the converse holds for general locally compact σ-compact groups. Also, I do not know any example of a locally compact group with Property PL but without the strong Property PL. If a locally compact group is not σ-compact, then it has no proper length and therefore both Property PL and strong Property PL mean that every length is bounded. Such groups are called strongly bounded (or are said to satisfy the Bergman Property); discrete examples are the full permutation group of any infinite set, as observed by Bergman [Be] (see also [C]). However, the study of Property PL is mainly interesting for σ-compact groups, as it is then easy to get a proper length (it is more involved to obtain a continuous proper length; this is done in [St], based on the Birkhoff-Kakutani metrization Theorem).

The main result of the paper is the following.

Theorem 1.4. Let K be a local field (that is, a non-discrete locally compact field) and G a simple linear algebraic group over K. Then G(K) satisfies strong Property PL.

This result was obtained by Y. Shalom [Sh] in the case of continuous Hilbert lengths, i.e. lengths L of the form L(g) = ‖gv − v‖ for some continuous affine isometric action of G on a Hilbert space, for groups of K-rank one. Some specific actions on Lp-spaces were also considered in [CTV]. My original motivation was to extend Shalom's result to actions on Lp-spaces, but actually the result turned out to be much more general.
However, even for isometric actions on general Banach spaces, we have to prove the result not only in K-rank one, but also in higher rank, in which case the reduction to SL2 requires some careful arguments. The first step is the case of SL2(K); it is elementary, but it seems that it has not been observed so far (even for K = R). Then, with some further work and making use of the Cartan decomposition, we get the general case. In the case of rank one, this second step is straightforward; this was enough in the case of Hilbert lengths considered in [Sh], in view of Kazhdan's Property T for simple groups of rank ≥ 2 (which states that every Hilbert length is bounded), but not in general, as there always exist unbounded lengths.

Remark 1.5. It is necessary to consider lengths bounded on compact subsets. Indeed, write R as the union of a properly increasing sequence of subfields Kn. (For instance, let I be a transcendence basis of R over Q, write I as the union of a properly increasing sequence of subsets In, and define Kn as the set of reals algebraic over Q(In).) If G = G(R) is a connected semisimple group, then ℓ(g) = min{n | g ∈ G(Kn)} is an unbounded symmetric (and ultrametric) non-locally-bounded length on G: ℓ is not bounded on compact subsets, and {ℓ ≤ n} is dense provided G is defined over Kn, which holds for n large enough. Also, if G = G(C) is complex and non-compact, if α is the automorphism of G induced by some non-continuous field automorphism of C, and if ℓ is the word length with respect to some compact generating set, then ℓ ∘ α is another example of a non-locally-bounded length that is neither bounded nor proper.

Finally, it is convenient to have a result for general semisimple groups.

Proposition 1.6. Let K be a local field and G a semisimple linear algebraic group over K. Let L be a semigroup length on G(K). Then L is proper if (and only if) the restriction of L to every non-compact K-simple factor Gi(K) is unbounded.

This proposition relies on Theorem 1.4, from which we get that L is proper on each factor Gi(K), and on an easy induction based on the following lemma, of independent interest.

Lemma 1.7. Let H × A be a locally compact group. Suppose that H = G(K) for some K-simple linear algebraic group G over K. Let L be a semigroup length on H × A, and suppose that L is proper on H and A. Then L is proper.

Here are some more examples of PL-groups, beyond semisimple groups.

Proposition 1.8. Let K be a compact group, with a given continuous orthogonal representation on Rn for n ≥ 2, so that the action on the unit sphere is transitive (e.g. K = SO(n), or K = SU(m) with 2m = n ≥ 4). Then the semidirect product G = Rn ⋊ K has strong Property PL.

Proposition 1.9. Let K be a non-Archimedean local field with local ring A. Then the group K ⋊ A* has strong Property PL.

Note that the locally compact group K ⋊ A* is not compactly generated.

2. Discussion on lengths
We observe here that our results actually hold for more general functions than lengths. Namely, call a weak length a function G → R+ which is locally bounded and satisfies
(Control Axiom) There exists a non-decreasing function φ : R+ → R+ such that L(xy) ≤ φ(max(L(x), L(y))) for all x, y.
Note that every semigroup length satisfies the control axiom with φ(t) = 2t. Besides, if L, L′ are two weak lengths on G, say that L is coarsely bounded by L′, and write L ≼ L′, if L ≤ u ∘ L′ for some proper function u : R+ → R+, and say that L and L′ are coarsely equivalent, denoted L ≃ L′, if L ≼ L′ ≼ L.
Here is a series of remarks concerning various definitions of lengths.
2. (Continuity) A construction due to Kakutani allows one to replace any length by a coarsely equivalent length which is moreover continuous (see [Hj, Theorem 7.2]).
3. (Symmetrization) If L is a semigroup length, then L′(g) = L(g) + L(g−1) is a length; clearly (L′ bounded) ⇒ (L bounded) and (L proper) ⇒ (L′ proper), but L′ can be proper although L is not. In particular, they are not necessarily coarsely equivalent; when it is the case, L is called coarsely symmetric. For instance, the semigroup word length in Z with respect to the generating subset {n ≥ −1} is not coarsely symmetric.
It is well-known that a locally compact group is σ-compact (i.e. a countable union of compact subsets) if and only if it possesses a proper length. Trivially, this is a sufficient condition. Let us recall why it is necessary: let (Kn) be a sequence of compact subsets covering G (we can take each Kn symmetric); we can suppose that K1 has non-empty interior. Define by induction M1 = K1 and Mn as the set of products of at most 2 elements in Mn−1 ∪ Kn. Then L(g) = inf{n | g ∈ Mn} satisfies the quasi-ultrametric axiom L(xy) ≤ max(L(x), L(y)) + 1 and is symmetric and proper.

3. Elementary results on lengths
Lemma 3.1. Let G be a locally compact group and K a compact normal subgroup. Then G has Property PL if and only if G/K has Property PL.
Proof. The forward implication is trivial. Conversely, if G/K has Property PL and L is a length on G, then L′(g) = sup_{k∈K} L(gk) is a length as well, so it is either bounded or proper, and L ≤ L′ ≤ L + sup_K L, so L is also either bounded or proper.
Lemma 3.2. Suppose that G has three closed subsets K, K′, D with K, K′ compact, and G = KDK′. Then a length on G is bounded (resp. proper) if and only if its restriction to D is so.
Proof. Suppose that a length L on G is proper on D. Let (gn) in G be bounded for L. Write gn = kn dn ℓn with (kn, dn, ℓn) ∈ K × D × K′. Then L(dn) is bounded. As L is proper on D and bounded on K and K′, it follows that (dn) = (kn−1 gn ℓn−1) is bounded; therefore (gn) is bounded as well. So L is proper on all of G. The case of boundedness is even easier.
As a consequence we get
Lemma 3.3. Let G be a locally compact group and H a cocompact subgroup. If H has (strong) Property PL, then G also has (strong) Property PL.
The converse is not true, even when H is normal in G, in view of Proposition 1.8.
Proof of Propositions 1.8 and 1.9. Let L be a semigroup length on G. If L is not proper, then there exists an unbounded sequence (ai) in Rn with L(ai) ≤ M for some M < +∞ independent of i. Using the transitivity of K, if M′ = M + 2 sup_K L, then for every i the length L is bounded by M′ on the sphere S(ai) of radius ‖ai‖ centered at 0. As every element of the ball B(ai) of radius ‖ai‖ centered at 0 is the sum of two elements of S(ai), it follows that L is bounded by 2M′ on B(ai). As ‖ai‖ → ∞, L is bounded on Rn, and hence L is bounded on all of G. Proposition 1.9 is proved in an analogous way, using the trivial fact that in any non-Archimedean local field K, for any m ≥ n, any element of valuation m is the sum of two elements of valuation n.
Proof of Proposition 1.3. Define L(g) as the least n such that g ∈ Sn and observe that L is bounded on compact subsets because, by the Baire category theorem, Sk has non-empty interior for some k. If S is symmetric, then L is a length. So the assumption implies that either L is proper (and hence S is bounded) or L is bounded (and hence Sn = G for some n). Conversely, suppose that G is compactly generated and the condition holds.
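To make the last example of Remark 3 concrete, here is a short worked check, added here for the reader's convenience (it is not part of the original text), that the semigroup word length on Z with respect to S = {n ∈ Z : n ≥ −1} is not coarsely symmetric.

```latex
% Worked check: the semigroup word length on Z with respect to S = {n >= -1}
% is not coarsely symmetric.
Let $S=\{n\in\mathbb{Z}: n\ge -1\}$ and let $L(g)=\min\{k : g\in S^{k}\}$,
where $S^{k}$ denotes sums of $k$ elements of $S$.
For every $m\ge 1$ we have $m\in S$, hence $L(m)=1$.
On the other hand, a sum of $k$ elements of $S$ is at least $-k$, so $-m\in S^{k}$ forces $k\ge m$;
since $-m=(-1)+\dots+(-1)$ ($m$ terms), we get $L(-m)=m$.
Thus $L(m)=1$ while $L(m^{-1})=L(-m)=m\to\infty$, so there is no function $u$ with
$L(g^{-1})\le u(L(g))$ for all $g$; hence $L$ and its symmetrization are not coarsely
equivalent, i.e. $L$ is not coarsely symmetric.
```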
Let L be a non-proper semigroup length (resp. length) on G. Set Sn = L−1([0, n]), which is symmetric if L is a length. By non-properness, there exists n0 such that Sn0 is unbounded. As G is compactly generated, Sn generates G for some n ≥ n0. Then, by assumption, every element of G is a product of a bounded number of elements from S = Sn. By subadditivity, this implies that L is bounded on G.
Proof of Proposition 1.2. (ii')⇒(ii) is trivial. (i)⇒(ii') Let G act on a nonempty metric space by C-Lipschitz maps, and define L(g) = d(x0, gx0) for some x0 in X. Then L satisfies the inequality L(gh) ≤ L(g) + C·L(h) for all g, h. By the remarks at the beginning of Section 2, L is a weak length, so it is coarsely equivalent to a length. So L is either proper or bounded. (ii)⇒(i) This follows from the fact that any length vanishing at 1 is of the form d(x0, gx0) for some isometric action of G on a metric space, and from Remark 1 in Section 2. Of course (ii) implies (iii). The converse follows from the construction in [NP, Section 5]: every metric space X embeds isometrically into an affine Banach space B(X), equivariantly, i.e. so that any isometric group action on X extends uniquely to an action by affine isometries on B(X).

4. Lengths on semisimple groups
Let us now proceed to the proof of Theorem 1.4.

4.1. Lengths on the affine group
Let K be a local field, and D a cocompact subgroup of K*.
Proposition 4.1. Let L be a symmetric length on K ⋊ D. If L is non-proper on D, then L is bounded on K.
Proof. Fix a compact neighborhood W of 1, so that L is bounded by a constant M on W. Suppose that the length L is not proper on D: there exists an unbounded sequence (an) in D such that L(an) is bounded by a constant M′. Let u be any element of the subgroup K. Replacing some of the an by an−1 if necessary, we can suppose that an u an−1 → 1 (we use that L is symmetric). Then for n large enough, wn = an u an−1 ∈ W, on which L is bounded by M. Writing u = an−1 wn an, we obtain that L(u) ≤ M + 2M′.
Remark 4.2. Proposition 4.1 is false for semigroup lengths. Indeed, the subset {(x, λ) ∈ K ⋊ D : |x| ≤ 1, 0 < |λ| ≤ M} generates K ⋊ D provided M is large enough; the corresponding semigroup word length is obviously non-proper, and is easily checked to be unbounded on K.
Remark 4.3. Proposition 4.1 still holds if the normal subgroup K is replaced by a finite-dimensional K-vector space, with D acting by scalar multiplication.

4.2. Case of SL2
Denote by G = SL2(K), and by D, U, and K the sets of diagonal, upper unipotent, and orthogonal matrices in G, respectively. Let L be any semigroup length on G. We have a Cartan decomposition G = KDK, which implies by Lemma 3.2 that boundedness and properness of the length L on G can be checked on D. The matrix M with rows (0, −1) and (1, 0) conjugates any matrix in D to its inverse. It follows that L(g) + L(g−1) is equivalent to L on D, and hence on all of G. In other words, we can suppose that the length L is symmetric. So if L is non-proper, then L is bounded on U by Proposition 4.1. Similarly, L is bounded on Ut, the lower unipotent subgroup of G (this also follows from the fact that Ut is conjugate to U by M). As every element of G is a product of four elements in U ∪ Ut, we conclude that L is bounded on G.

4.3. Reduction to G simply connected
Let H → G be the (algebraic) universal covering of G. Then the map H(K) → G(K) has finite kernel and cocompact image. Therefore, by Lemmas 3.3 and 3.1, strong Property PL for G(K) follows from strong Property PL for H(K).
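The claim used at the end of the SL2 argument — that every element of SL2(K) is a product of four elements of U ∪ Ut — can be checked by the following elementary identity, included here as a standard verification for the reader's convenience (it is not quoted from the paper).

```latex
% For ad - bc = 1 and c \neq 0, three unipotent factors suffice:
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
=
\begin{pmatrix} 1 & \tfrac{a-1}{c} \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ c & 1 \end{pmatrix}
\begin{pmatrix} 1 & \tfrac{d-1}{c} \\ 0 & 1 \end{pmatrix},
% indeed the (1,2)-entry of the right-hand side is (ad-1)/c = bc/c = b.
```

If c = 0, then a ≠ 0, and multiplying on the right by the lower unipotent matrix with entry 1 produces a matrix whose lower-left entry is d = a−1 ≠ 0; so in this case four unipotent factors suffice.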
So we can assume G algebraically simply connected, and it will be convenient and harmless to identify G with G(K). Let d ≥ 1 be the K-rank of the simply connected K-simple group G and let D be a maximal split torus in G. The Cartan decomposition tells us that there exists a compact subgroup K of G such that G = KDK (in the case of Lie groups, see [He, Chap. IX, 1.]; in the non-Archimedean case, see Section 4.4 of the relevant reference). So the proof consists in proving that if a semigroup length L on G is not proper on D, then it is bounded.

4.4. Rank one
This case is not necessary for the general case, but we wish to point out that the conclusion is then straightforward. Indeed, if G is such a group, then its subgroup D is contained in a subgroup isomorphic to SL2(K) or PSL2(K), and therefore every length on G is either proper or bounded on D.

4.5. General case
There remains the case of higher rank groups. Let W be the relative Weyl group of G with respect to D, that is, the normalizer of D modulo its centralizer. Let D∨ ≃ Zd be the group of multiplicative characters of D, that is, of K-defined homomorphisms from D ≃ (K*)d to the multiplicative group K*. Then by [BoT, Corollary 5.11], the relative root system is irreducible, so that by [Bk, Chap. V.3, Proposition 5(v)], the action of W on D∨ ⊗Z R is irreducible. If u is a function D → R+, we say that a sequence (an) in D is u-bounded if (u(an)) is bounded. Let Γ ⊂ D∨ be the set of α ∈ D∨ such that every L-bounded sequence (an) in D is also v∘α-bounded, where v(λ) = log|λ| by definition. Then Γ is a subgroup of D∨. It is easy to check that D∨/Γ is torsion-free and that Γ is W-invariant. On the other hand, by irreducibility, either Γ = {0} or Γ has finite index in D∨. As D∨/Γ is torsion-free, this means that either Γ = {0} or Γ = D∨. Suppose that L is not proper. Then there exists a sequence (an) in D which is L-bounded but not bounded. So there exists α ∈ D∨ such that v∘α(an) is unbounded. It follows that Γ ≠ D∨, so Γ = {0}. In particular, for every relative root α, there exists a sequence (an) which is L-bounded but not α-bounded. The argument for SL2 implies that L is bounded on Uα, and therefore, for any root α, L is bounded on Dα = [Uα, U−α]. As any element of D is a product of d elements taken from the subgroups Dα, we obtain that L is bounded on D.

5. Auxiliary results
Proof of Lemma 1.7. By the argument of Paragraph 4.3, we can suppose that G is simply connected. Let L be a length on H × A, and suppose that L is proper on both H and A. Suppose that L is not proper. Then there exists a sequence (hn, an) tending to infinity in H × A so that L(hn, an) is bounded. As L is bounded on compact subsets and is proper in restriction to the factor A, the sequence (hn) tends to infinity. By Lemma 5.1 below, there exist bounded sequences (kn), (k′n) and an element u in G(K) such that, writing dn = kn hn k′n, the sequence of commutators ([dn, u]) is unbounded. Note that L(dn, an) is bounded as well. Suppose first that L(dn−1, an−1) is bounded (this holds if L is assumed coarsely symmetric). Then L([(dn, an), (u, 1)]) = L([dn, u], 1) is bounded. But this contradicts the properness of the restriction of L to H. If L(dn−1, an−1) is not assumed bounded, we can go on as follows. First note that the proof of Lemma 5.1 provides (dn) as a sequence in the maximal split torus D, and we assume this. If W denotes the Weyl group of D in H, then for every d ∈ D the element ∏_{w∈W} w d w−1 of D is fixed by W, so is trivial.
Now the sequence L(∏_{w∈W} (w, 1)(dn, an)(w−1, 1)) = L(1, an^{|W|}) is bounded. Therefore, by properness on {1} × A, the sequence κn = an^{|W|} is bounded. Thus, the sequence L(1, κn−1) is bounded. Now the sequence L(∏_{w∈W∖{1}} (w, 1)(dn, an)(w−1, 1)) is bounded in turn, so L(dn−1, an−1) is bounded, and this case is settled.

Lemma 5.1. Let K be a local field and G a simple simply connected linear algebraic group over K. Let (gn) be an unbounded sequence in G(K). Then there exist bounded sequences (kn), (k′n) and an element u in G(K) such that the sequence of commutators ([kn gn k′n, u]) is unbounded.
Proof. By the Cartan decomposition (see Paragraph 4.3), we first pick (kn) and (k′n) such that an = kn gn k′n belongs to D, the maximal split torus. There exists a weight α such that α(an) is unbounded. Fix a nontrivial u in the unipotent subgroup Gα. Then an u an−1 is unbounded, so [an, u] is unbounded as well.

Proposition 1.6 follows from Lemma 1.7 when G is a direct product of simple groups. The general case follows by passing to the (algebraic) universal covering G̃ of G, as G̃(K) maps to G(K) with finite kernel and cocompact image.
Cryptocurrency investment is an investment instrument with high risk and potentially greater returns than other investment instruments. To make a big profit, investors need to analyze cryptocurrency investments to predict the purchase price. The highly volatile movement of cryptocurrency prices makes it difficult for investors to predict those prices. Data mining is the process of extracting information from large amounts of data by collecting and using the data, the history of data relationship patterns, and relationships in large data sets. Support Vector Regression has the advantage of making accurate cryptocurrency price predictions and can by itself overcome the problem of overfitting. Polkadot is one of the cryptocurrencies that are often used as investment instruments in the cryptocurrency world. Polkadot price prediction analysis using the Support Vector Regression algorithm achieves a good predictive accuracy on Polkadot daily closing price data: with a radial basis function (RBF) kernel with cost parameter C = 1000 and gamma = 0.001, a model accuracy of 90.00% and a MAPE of 5.28 were obtained, while for the linear kernel with parameter C = 10, an accuracy of 87.68% with a MAPE value of 6.10 was obtained. It can be concluded that, through parameter tuning, the model with the best accuracy and MAPE uses the radial basis function (RBF) kernel with cost parameter C = 1000 and gamma = 0.001. The results show that the Support Vector Regression method performs quite well for predicting the Polkadot cryptocurrency.

I. INTRODUCTION
Cryptocurrency is a digital or virtual currency designed as a medium of exchange [1]. The term cryptocurrency comes from cryptography, which means secret code, and currency. In other words, cryptocurrency is a virtual currency that is protected by a secret code. Cryptography is a method used to protect information and communication channels through the use of code. The use of cryptography means that cryptocurrencies cannot be manipulated; that is, cryptocurrency transactions cannot be falsified [2]. The recording of cryptocurrency transactions is usually done in a blockchain system. Blockchain technology, also called Distributed Ledger Technology (DLT), is a concept in which every participant or party who is a member of a distributed network has access rights to the bookkeeping [3]. One cryptocurrency that is much in demand by investors and traders is Polkadot. According to the coinmarketcap.com website, Polkadot is ranked in the top 10 of the world's cryptocurrencies by market capitalization, with a market capitalization of $24,120,241,891 in 2022. This shows that Polkadot is one of the leading cryptocurrencies. Polkadot is a sharded multichain network, meaning it can process many transactions on multiple chains in parallel ("parachains"). These parallel processing capabilities increase scalability. Polkadot was founded by the Web3 Foundation, a Swiss institution founded to facilitate a fully functional and user-friendly decentralized web, as an open-source project [4]. Crypto investing is a type of investment that offers a high return, which is of course accompanied by a high level of risk. Therefore, the provision of the right information is very useful for every individual and business person in planning or designing a mature strategy for making decisions that reduce risk and capture returns [5]. One way to do this is to predict the price of Polkadot accurately.
In making Polkadot price predictions, a machine learning method is needed to obtain prediction results close to the actual data. One of the machine learning algorithms that can be used for Polkadot price prediction is the Support Vector Regression algorithm. Support Vector Regression (SVR) is the development of a regression model from Support Vector Machines (SVM), which were originally used for classification problems [6]. In this study, SVR was applied to time series data, that is, data in the form of a series of events or observations taken sequentially over time. SVR has been widely used for stock price forecasting and shows better performance than other algorithms, including ANN; ANN has also been widely used in forecasting as a promising alternative for predicting stock prices, but ANN finds a locally optimal solution while SVR finds a globally optimal solution (Santosa, 2007). Therefore, based on the description above, in this study Support Vector Regression (SVR) is used to predict the closing price of Polkadot.

A. Cryptocurrency
Simply put, cryptocurrency can be interpreted as a digital currency. Cryptocurrency is a method of creating virtual "coins", providing them, and securing ownership and transactions using cryptographic technology. Cryptography is simply a technique of protecting information by transforming it (e.g., encrypting it) into a format that cannot be read and can only be deciphered by someone who has a secret key. Cryptocurrency has a decentralized nature, which means transactions are conducted peer-to-peer from sender to recipient in the absence of intermediaries. Some well-known cryptocurrencies in Indonesia are Bitcoin, Ethereum, Litecoin, Dash, Ripple, Bitcoin Cash, Bitcoin Gold, Zcash, Monero, Maker, Byteball, and others. Cryptocurrency is different from currencies in general because the transaction model commonly used by the general public is centralized [7].

Bitcoin is a technology developed as a payment medium; as time has gone on, more and more Bitcoin users have pushed the Bitcoin exchange rate higher, so Bitcoin is now considered a digital asset or investment instrument. Every Bitcoin transaction is stored in a sealed block using a specific code based on cryptographic science [8]. Bitcoin is one of the cryptocurrencies in which a very secure cryptographic technique guarantees transactions. Bitcoin is the cryptocurrency that appeared first and is still popular today; the name Bitcoin has become almost synonymous with blockchain itself. The presence of cryptocurrencies is an answer to the transaction needs of the current digital era: easy, fast, transparent, and acceptable to both transacting parties [9].

The centralized nature is exemplified by the transaction model that the community has generally used. For example, when someone wants to send some money to someone else, all he does is use banking services (ATM, mobile banking, or coming directly to the relevant bank) and then transfer some money to the person's account number. The transaction is done through a bank intermediary as a trusted service, so the money that is transferred goes to the bank first and is then passed on to the recipient. The process is real-time, so the transfer does not feel slow. However, because the process goes through an intermediary, there is a fee to be paid, namely administrative costs [10].
The decentralized nature, by contrast, means that no third party acts as an intermediary: transactions are conducted peer-to-peer from sender to receiver. All transactions are recorded on computers in a worldwide network by miners, who help secure and record transactions on the network. Miners earn commissions in the virtual currency used, but not everyone can become a miner, as it requires special expertise and complex computational processing to solve the cryptography involved. This is one reason cryptocurrency miners generally use high-specification, specialized computers. This decentralized nature is the DNA of the blockchain system, and blockchain becomes the platform that allows digital cryptocurrencies to be used for transactions [11]. Polkadot is an open-source sharded multichain protocol that facilitates the cross-chain transfer of data or of any type of asset, not just tokens, thus allowing a wide variety of blockchains to interoperate. This interoperability seeks to build a fully decentralized and private web controlled by its users and to simplify the creation of new applications, institutions, and services. The Polkadot protocol connects public and private chains, permissionless networks, oracles, and future technologies, enabling independent blockchains to share information and transactions trustlessly through Polkadot's relay chain. Polkadot's native DOT token has three clear purposes: network governance, network operations, and the creation of parachains through bonding [12].

B. Machine Learning

Machine learning is a branch of artificial intelligence (AI). Artificial intelligence systems are designed to resemble the working of the human brain using computer algorithms. Machine learning is the ability of a computer to learn without being explicitly programmed; it is a type of artificial intelligence that gives computers the ability to learn from data without explicitly following programmed instructions. The characteristic of machine learning is the process of training and learning, so it requires data to be learned from, called training data, and data on which to be tested, called testing data [13]. In general, there are two types of machine learning. Supervised learning has input and output variables and uses one or more algorithms to learn the function that maps inputs to outputs; the result is an estimate of the mapping function so that, given a new input, the output can be predicted. Unsupervised learning has only input data, with no associated output variables; its result is a model of the underlying structure in the data that allows the data to be studied further [14]. Prediction is the process of systematically estimating what is most likely to occur in the future based on available past and present information, so that the error (the difference between what actually happens and the estimate) can be minimized [15].

C. Support Vector Regression

Data mining is an analytic process designed to examine large amounts of data in search of valuable hidden knowledge [16]. Data mining aims to find desired trends or patterns in large databases to assist future decision-making [17].
The data mining process consists of several stages: data selection, data cleaning, data transformation, application of data mining methods, and evaluation of the patterns found. Data mining is divided into five methods based on functionality: estimation, prediction, classification, clustering, and association [18]. Support Vector Regression (SVR) is the development of a regression model from Support Vector Machines (SVM), which were originally used for classification problems. In this study, SVR is applied to time series data, that is, a series of events or observations taken sequentially over time. The goal of SVR is to map the data into a higher-dimensional feature space in which a regression function can be fitted [19]. The general form of the regression function is

f(x) = ω · φ(x) + b

where ω is the weight vector, b is the bias coefficient, and φ(x) is the function that maps x into the higher-dimensional feature space. The algorithm consists of several stages:

1) Parameter initialization: the SVR method uses several parameters, namely ε and C, which determine the error tolerance; cLR (the learning rate), which determines the speed of the learning process; σ, a constant that affects the spread of the data in the kernel dimension; and λ, which determines the scale of the SVR kernel mapping dimension [20].

2) Hessian matrix calculation: the Hessian matrix is calculated as

[R]ij = K(xi, xj) + λ², for i, j = 1, 2, …, n,

where K is a kernel function used to map the data into a higher-dimensional, more structured space. The Gaussian (RBF) kernel has been widely used in previous research and is considered capable of delivering good results in SVR. It is defined as

K(x, xi) = exp(−‖x − xi‖² / (2σ²)),

where x and xi are data points and σ is the dimension constant. The value of σ must be defined at the beginning: if it is too small the model fits the training data too closely, while if it is too large the kernel becomes too inflexible for complex calculations.

Error value and Lagrange multiplier calculation: the Lagrange multipliers αi and αi* are first initialized to 0. For each training point, the error value between the target and the current regression output is computed, the changes δαi and δαi* are obtained from this error, and the multipliers are updated as

αi*′ = δαi* + αi* and αi′ = δαi + αi.

This stage is repeated for every training data point.

3) Iteration process: the error and update calculations above are repeated (iterated) until one of the following conditions is met: the iteration reaches the predetermined maximum number of iterations, or there is no further change in the Lagrange multipliers (convergence), that is, the changes satisfy max(|δαi|) < ε and max(|δαi*|) < ε.

4) Calculation of forecasting results: the forecast is obtained from the fitted regression function

f(x) = Σ (αi* − αi) K(xi, x) + b

[21]. The Mean Absolute Percentage Error (MAPE) is calculated as

MAPE = (1/n) Σ |(Xt − Yt) / Xt| × 100%,

where Xt is the actual value, Yt is the predicted value, and n is the number of data points.
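To make the kernel and error formulas above concrete, the following is a minimal Python sketch (the study reports using Python 3.9) that evaluates the Gaussian/RBF kernel between two data points and the MAPE between actual and predicted series. The function names and example values are illustrative assumptions, not taken from the paper.

import numpy as np

def rbf_kernel(x, x_i, sigma=1.0):
    # Gaussian (RBF) kernel: K(x, xi) = exp(-||x - xi||^2 / (2*sigma^2))
    diff = np.asarray(x, dtype=float) - np.asarray(x_i, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def mape(actual, predicted):
    # MAPE in percent: (1/n) * sum(|(Xt - Yt) / Xt|) * 100
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

# Illustrative values only (not Polkadot data from the paper).
print(rbf_kernel([25.0], [26.5], sigma=0.5))
print(mape([25.0, 26.0, 27.5], [24.2, 26.8, 27.1]))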
E. Research Approach

The analysis in this study aims to describe the closing price data of the Polkadot cryptocurrency and to predict the closing price using Support Vector Regression (SVR). The researchers used RBF kernels with cost parameters C = 10, 100, 1000 and gamma = 0.1, 0.01, 0.001, 0.0001, and linear kernels with cost parameters C = 10, 100, 1000. The tools used in this study were the Python 3.9 programming language and LibreOffice. The stages of the data analysis are described in the flowchart in Fig. 2 (Flowchart of the Support Vector Regression analysis). The analytical steps are:

- Prepare Polkadot daily data for the period August 20, 2020 to December 31, 2021, downloaded from Yahoo Finance.
- Perform a descriptive analysis of the Polkadot daily data to obtain an overview of the data.
- Preprocess the data, including defining the dependent variable (Y) and independent variable (X), and then transforming both variables.
- Divide the data into two parts: training data and testing data.
- Determine the kernel to be used and the cost (C) and gamma parameters for the Support Vector Regression analysis.
- Perform the Support Vector Regression analysis, first with the parameters and kernels determined from the literature study.
- Tune the parameters to obtain optimal accuracy and minimal error.
- Postprocess by denormalizing the data to obtain the predictions.
- Predict future Polkadot price data.
- Interpret the Support Vector Regression results obtained with the best parameters and kernel.

F. Data Collection

The data used in this study are secondary data obtained from several websites: www.coinmarketcap.com for Polkadot blockchain information and www.finance.yahoo.com for Polkadot daily price data. The collection period is the daily Polkadot price from August 20, 2020 to December 31, 2021, comprising 499 records. The variable used in this study is the Polkadot closing price (Close); the research variables and their operational definitions are summarized in the variables table.

A. Data Preprocessing

Data preprocessing is performed to clean the data so that the raw data are more easily handled by the Support Vector Regression (SVR) algorithm. In this study, the preprocessing stage consists of determining the input (independent) and output (dependent) variables, normalizing the data, and dividing the data into training and testing sets.

1) Variable determination: The data used in this study are the Polkadot daily closing prices for the period August 20, 2020 to December 31, 2021, consisting of 499 observations. The study applies supervised learning, which requires input and output variables to be learned by the algorithm. The input is the Polkadot daily closing price of the previous period, used to predict the Polkadot price one day later; the underlying assumption is that today's Polkadot price is influenced by the price in the previous period.

2) Data normalization: At this stage, the input and output data were normalized to the range 0-1 using min-max normalization.
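As a concrete illustration of the variable determination and normalization steps above, the following Python sketch builds the one-day-lag input and applies min-max normalization with scikit-learn. The file name and column names follow the usual Yahoo Finance CSV export and are assumptions, not details taken from the paper.

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Daily Polkadot prices exported from Yahoo Finance; the file name is illustrative.
df = pd.read_csv("DOT-USD.csv", parse_dates=["Date"])

# Input X: previous day's closing price; output y: current closing price.
df["Close_lag1"] = df["Close"].shift(1)
df = df.dropna()

# Normalize inputs and outputs to the range 0-1.
x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
X = x_scaler.fit_transform(df[["Close_lag1"]])
y = y_scaler.fit_transform(df[["Close"]]).ravel()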
3) Training and testing data: The data were divided into training data and testing data for the Support Vector Regression analysis. The split is made so that the Support Vector Regression model can be trained on one portion of the data and its performance in determining the best parameters can be assessed on the remaining portion. The split is shown in the following table:

TABLE IV
SHARING OF TRAINING AND TESTING DATA
Information      Training   Testing   Total
Amount of data   399        100       499
Percentage       80%        20%       100%

Based on Table IV, 80% of the total data are used as training data and the remaining 20% as testing data. The training portion is larger so that the machine learning model is better trained; the model formed from the training data is then evaluated on the testing data so that its forecasts on unseen data are more reliable. The split between training and testing data was performed randomly using the Python programming language. The training data were then fitted with the Support Vector Regression method to form a model for each combination of parameters, and the testing data were used to test the resulting models.

B. Support Vector Regression Analysis

In theory, the Support Vector Regression (SVR) method is an adaptation of the machine learning theory previously used for classification problems, the Support Vector Machine (SVM), to regression cases. As in the SVM, SVR modeling determines an optimal hyperplane through its parameters to form a model. Whereas the SVM classifies data into two classes using support vectors, SVR determines its parameters so that the support vectors lie within the margin around the hyperplane, forming an optimal regression model. The parameters used to form the model in this study are those of the linear and radial basis function kernels. The focus of the study is on the linear kernel and the radial basis function kernel with C parameters of 10, 100, and 1000, which control the tolerance of the support vectors with respect to the hyperplane, and gamma parameters for the radial basis function kernel of 0.1, 0.01, 0.001, and 0.0001. The model's performance is measured using R-square and MAPE: the closer R-square is to 1, the better the model, provided the model is neither overfitting nor underfitting. Overfitting occurs when the model fits the training data so closely that its accuracy drops when tested on different data, while underfitting occurs when the training data, or the model trained on them, does not represent the data as a whole, causing poor model performance.

C. Evaluation of the Support Vector Regression Model

The evaluation of the model on the Polkadot daily data reports the R-square and MAPE values for each kernel used, and the corresponding table compares the accuracy of each kernel. On the testing data, the linear kernel with cost parameter C = 10 obtained an accuracy of 87.68% and a MAPE of 6.10, while the RBF kernel with cost parameter C = 10 and gamma = 0.1 obtained an accuracy of 80.33% and a MAPE of 7.05. Of the two kernels, the linear kernel therefore had the higher accuracy before tuning, forming the more optimal model with a fairly low MAPE value.
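The kernel comparison above can be reproduced in outline with scikit-learn. The sketch below continues from the scaled arrays X, y and the y_scaler of the preprocessing sketch, splits the data 80/20 at random, and evaluates the linear (C = 10) and RBF (C = 10, gamma = 0.1) models with R-square and MAPE; the random seed and variable names are illustrative assumptions.

from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_absolute_percentage_error

# Random 80/20 split, as described in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, shuffle=True, random_state=42)

models = {
    "linear, C=10": SVR(kernel="linear", C=10),
    "rbf, C=10, gamma=0.1": SVR(kernel="rbf", C=10, gamma=0.1),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # Denormalize before computing MAPE so the error is on the price scale.
    actual = y_scaler.inverse_transform(y_test.reshape(-1, 1)).ravel()
    forecast = y_scaler.inverse_transform(pred.reshape(-1, 1)).ravel()
    print(name,
          "R2 = %.4f" % r2_score(y_test, pred),
          "MAPE = %.2f%%" % (100 * mean_absolute_percentage_error(actual, forecast)))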
Furthermore, the researchers tuned the parameters using the Grid Search algorithm to improve the model's performance. Tuning is the process of determining the parameters that give the best model, and Table 6 shows each kernel's accuracy and MAPE values after parameter tuning. After tuning the parameters with 10-fold cross-validation, the model performance on the testing data improved, with R-square values of 87.68% for the linear kernel and 90.00% for the RBF kernel, and MAPE values of 6.10 and 5.28, respectively, so the best model is obtained with the RBF kernel. It can be concluded that, through parameter tuning, the resulting model has the best accuracy and MAPE values. The optimal parameters obtained from tuning for predicting the daily Polkadot price are C = 10 for the linear kernel, and C = 1000 with gamma = 0.001 for the RBF kernel. Figure 4 plots the RBF kernel model with the best performance, comparing the actual Polkadot daily closing prices with the training and testing predictions. The X axis (Date) is the sequence of actual and predicted periods, and the Y axis (Polkadot price) is the daily closing price for the actual and predicted data; the actual data are shown as a blue line, the training predictions as an orange line, and the testing predictions as a green line. The plot shows that, with the radial basis function (RBF) kernel, the predicted series closely follows the actual series, indicating that the predicted Polkadot daily closing prices are not much different from the actual closing prices.

D. Polkadot Daily Closing Price Prediction

From the parameter experiments used to form the Support Vector Regression (SVR) model, the next stage is to predict the daily closing price of Polkadot using the best model formed previously. The model generated from the Polkadot daily closing price data shows good performance in the line graph of actual versus predicted values, with the prediction plot following the actual data plot, meaning the predictions are not much different from the actual data. Table 7 shows the predicted and actual daily closing prices of Polkadot on the testing data using the best model. Table 8 presents the forecast of Polkadot's daily closing price for the next 10 periods using the Support Vector Regression model with the RBF kernel and parameters C = 1000 and gamma = 0.001. According to the forecast, the Polkadot daily closing price moves sideways in the range of 26 to 28 US$ per coin.
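The tuning step can be sketched with scikit-learn's GridSearchCV over the same parameter grid and 10-fold cross-validation, followed by a simple recursive forecast of the next 10 daily closes. This continues from the objects defined in the earlier sketches; the recursive feedback scheme and variable names are assumptions for illustration, not details taken from the paper.

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Grid of kernels and parameters stated in the study.
param_grid = [
    {"kernel": ["linear"], "C": [10, 100, 1000]},
    {"kernel": ["rbf"], "C": [10, 100, 1000],
     "gamma": [0.1, 0.01, 0.001, 0.0001]},
]
search = GridSearchCV(SVR(), param_grid, cv=10, scoring="r2")
search.fit(X, y)
best_model = search.best_estimator_
print("Best parameters:", search.best_params_)

# Recursive forecast of the next 10 periods: each predicted price becomes
# the lagged input for the following day (an illustrative scheme).
last_close = float(df["Close"].iloc[-1])
forecast_prices = []
for _ in range(10):
    x_scaled = x_scaler.transform([[last_close]])
    y_scaled = best_model.predict(x_scaled)[0]
    last_close = float(y_scaler.inverse_transform([[y_scaled]])[0, 0])
    forecast_prices.append(last_close)
print(forecast_prices)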
IV. CONCLUSION

Based on the results described in the previous sections, the following conclusions can be drawn. The descriptive analysis shows that Polkadot's daily closing price fluctuated over the period from August 20, 2020 to December 31, 2021. The Support Vector Regression (SVR) method can be applied to predict the daily closing price of Polkadot. The best SVR model obtained for the Polkadot daily closing price data uses a radial basis function (RBF) kernel with cost parameter C = 1000 and gamma = 0.001, giving a model accuracy of 90.00% and a MAPE of 5.28, while the linear kernel with parameter C = 10 gives an accuracy of 87.68% and a MAPE of 6.10. It can therefore be concluded that, through parameter tuning, the model with the best accuracy and MAPE is the one with the RBF kernel and cost parameters C = 1000 and gamma = 0.001. The forecast of Polkadot's daily closing price for the next 10 periods tends to move sideways in the range of 26 to 28 US$ per coin.
Vulnerability and Primary Health Care: An Integrative Literature Review The objective was to analyze the evidence available in the scientific literature on the concept of vulnerability, in theoretical perspectives and its use, in Primary Health Care. An integrative literature review was carried out with the inclusion criteria: articles in English, full text, peerreviewed, related to vulnerability and primary health care, with the explicit concept of vulnerability, and published until July 31, 2020. The electronic databases accessed were by crossing the descriptors “vulnerability,” “vulnerabilities,” “primary health care,” “primary healthcare,” and “primary care.” The final sample consisted of 19 articles. The thematic analysis produced 2 themes: “Theoretical foundations of the concept of vulnerability” and “The use of the concept of vulnerability in PHC.” In the second theme, 2 sub-themes emerged: Evaluation of health policies, programs, and services and Classification of individuals, groups, and families. There was a plurality of theoretical foundations for the concept of vulnerability and a smaller scope of its use in Primary Health Care. It is expected that the study will subsidize public policymakers and health teams in the design of services and actions aimed at vulnerable populations and in situations of vulnerability. Introduction Primary Health Care (PHC) is based on an in-depth knowledge of the territory where the people and families live, constituting one of the premises for the organization of health care practices. In PHC, the understanding of the individual, family, and community context allows an approximation with the social determinants of health (SDH). The World Health Organization (WHO) defines SDH as the circumstances in which people are born, grow up, live, and work, including the health system, and the economic, political, and social forces that shape them. 1 The recognition of the influence of social factors on people's health conditions is a contemporary theme that has directed various public policies, in Brazil and around the world. An important concept that can decisively support the comprehension of the dynamics of the territory and the SDH of a given community and, consequently, increase the knowledge of PHC teams in relation to the dynamics of the lives of the communities who's health they are responsible for is that of vulnerability. Etymologically, the term would have originated from the words "vulnerare" (hurt, harm, harm) and bile (susceptible to). 2 In the field of Bioethics, vulnerability refers to a state of being/being in danger or exposed to risk by an individual characteristic of the inherent fragility of human beings. 2 In health, this term has a broader connotation and is associated with the recognition that human beings may be susceptible to damage or risks due to social disadvantages. 2 In the national and international scientific literature, there is relevant production on the theoretical aspects of the term vulnerability, in different areas of knowledge, such as geography, economics, environmental health, aging, legal, and social sciences. 3 In the health literature, many articles use the concept of vulnerability to indicate the potential risk of developing certain diseases or suffering from environmental hazards. There are many publications dedicated to the study of vulnerable populations and the consequences of vulnerability for a worse state of health. 
In particular, articles that discuss vulnerability and PHC generally address health inequalities by relating them to PHC attributes (accessibility, comprehensiveness, coordination, continuity, and accountability). 3 However, little is discussed about its conceptual scope and its applicability in the scenario of health systems, specifically in PHC. It is essential to understand the founding characteristics of the concept of vulnerability, especially for the performance in the PHC, which carries out its practices in close connection with the territories. Identifying the most prevalent health problems and their determinants from the concept of vulnerability and its application can influence public policymakers and health teams in the programing and prioritization of actions based on the principle of equity. Accordingly, the aim of this review was to analyze the evidence available in the scientific literature in relation to the concept of vulnerability, from the theoretical perspective, and its applicability, within the scope of PHC. Methods The guiding question "What is the concept of vulnerability and its use in studies carried out in Primary Health Care?" was examined through a modified PICO strategy 4 : "P" Problem or target population of the study-Concept of vulnerability, "I" Intervention-Use of the concept in primary health care (PHC), "C" Control or comparison-without comparison, and "O" Outcome-Categorization of vulnerability in PHC. Type of Study and Methodological Procedures An integrative literature review was carried out. This approach was chosen since it allows for the integration of concepts, ideas, and opinions in a broader approach for the phenomenon studied. 5 The review was based on 6 steps: identification of the theme and selection of the research question, establishment of inclusion and exclusion criteria, identification of the pre-selected and selected studies, categorization of the selected studies, analysis and interpretation of the results, and presentation of the knowledge review/synthesis. 6 Inclusion and Exclusion Criteria The inclusion criteria defined for the literature search were: full, peer-reviewed articles available in English, related to vulnerability and primary health care, with the explicit concept of vulnerability and published prior to July 31, 2020. Dissertations, theses, reviews, editorial notes, and articles without access to the abstract and the full text were not included. Data Sources The electronic bibliometric databases accessed, in the period from August to September 2020, were: National Library of Medicine through the PubMed portal, Scopus, Embase, Cumulative Index to Nursing and Allied Health Literature (CINAHL), and Latin American and Caribbean Health Sciences Literature (LILACS). For the search strategy in the 5 databases, the following keyword was considered: "Vulnerability"; "Vulnerabilities"; "Primary Health Care"; "Primary Healthcare"; and "Primary Care," with the use of the Boolean operators "OR" and "AND." DECS/ MESH referred to these controlled descriptors. In the 5 databases, descriptors and the keyword were used in the English language (Table 1). Table 1.The step for the selection of the studies was performed blindly by 3 reviewers using the Rayyan software. 7 Initially, the title and abstract were read, applying the inclusion criteria. Dissenting opinions regarding the inclusion of some articles were resolved by the 3 reviewers, in a consensual and face-to-face way, resulting in the composition of the final sample. 
Subsequently, the 3 reviewers read the selected articles in full. Data from the studies were extracted using a validated instrument 8 and the articles were classified according to the level of evidence. 9 Analysis We describe the articles included according to the year of publication, title, authors, country, study design and 7 levels of evidence. Thematic analysis and synthesis of the evidence was conducted to examine the theoretical basis of vulnerability and its use in PHC. As this is a review article, the Prisma checklist was applied in the development of the study, observing the items relevant to the integrative review. 10 Results The database search resulted in a total of 2869 articles. With the removal of duplicates, 1201 articles were obtained. After reading the titles and abstracts, 1163 articles were excluded as they did not fulfill the inclusion criteria, leaving 38 articles eligible for review. In-depth reading of the 38 articles led to the exclusion of 19 (50.0%) that did not answer the guiding question and/or for the following 5 reasons: 3 (15.8%) discussed the concept of vulnerability in relation to environmental conditions only; 8 (42.1%) did not discuss the concept of vulnerability in the PHC scenario, 5 (26.3%) discussed the concept of vulnerability applied to specific groups, such as pregnant women or older adults, without being related to the specific PHC scenario, 2 (10.5%) did not directly discuss the concept of vulnerability and 1 (5.3%) was a literature review ( Figure 1). Accordingly, 19 articles constituted the final sample of the study. Of these 19 articles, ten (52.6%) were published in Brazil, 3 (15.8%) in the United States, 2 in Canada (10.5%), 2 in England (10.5%), 1 in the Netherlands (5.3%), and 1 in New Zealand (5.3%). Regarding the chronology, there was an increase in productions on the theme of vulnerability in PHC over the years, a fact also observed in Brazil, between the years 2015 and 2018. Of the 19 articles selected, 2 (10.5%) were characterized as level of evidence V and 17 (89.5%) as level VI, with a predominance of cross-sectional, descriptive studies, using qualitative analysis. Table 2 shows the articles included in this review according to the year of publication, title, authors, country, study design, and level of evidence. The thematic analysis of the eligible articles produced 2 major themes: "Theoretical foundations of the concept of vulnerability" (Table 3) and "The use of the concept of vulnerability in PHC" (Table 4). In the second major theme, 2 sub-themes emerged: (a) Evaluation of health policies, programs, and services; and (b) Classification of individuals, groups, and families (Table 4). Theme 1: Theoretical Foundations of the Concept of Vulnerability The thematic analysis revealed that the authors included in this review, in order to conceptualize vulnerability, were supported by different theoretical references that, at times, complement each other and at other times differ due to the opposition of ideas. In 9 articles (47.4%) cited Ayres et al 32, 36 3 articles (15.8%) used Aday 30,31 as a reference. In 4 articles, the vulnerability concept was composed of more than one theoretical framework, with the following authors cited in the selected articles: Baker et al 11 (Table 3). The articles based on the concept proposed by Aday 30,31 presented vulnerability as a result of the combination or overlap of several risk factors that in a given period of time can lead to physical, psychological, and/or social health problems. 
The risk factors cited by the authors included: race, ethnicity, income, insurance coverage, self-perceived health, parenting, and the mother's language. There was also a typification of vulnerability considering the subjective, biological, material, relational, and cultural components. Aday's 31 discussion of vulnerability also considers that health care centered on economic practices, fragmented care, and limited access affects more specific population groups. Similarly, Shi et al, 12 Stevens et al, 13 and Haidar et al, 24 supported by the concept of Aday, 30 recognized that isolated or overlapping risk factors impact the behavior of seeking health services and the condition of being healthy or recovering from a health problem. For these authors, health care directed toward vulnerable populations has to go beyond the needs of physical, social, and psychological health. It is essential to consider other elements of existing programs and policies that will support care and access to it, in the organizational and financial components, as well as to analyze the quality of the service provided and, consequently, its result. Accordingly, the authors emphasized that quality health care, provided in primary care, 12,13 mainly in the public health systems, has a potential to reduce the vulnerability resulting from various risk conditions, through the attributes of accessibility and continuity of care. Loh 23 used the concept of Shi et al, 38 who also stated that vulnerability involves a set of risk factors that reinforce each other, being derived from the absence of material and social resources essential to human well-being, from the presence of risk behaviors and from the influence of environmental factors. These authors criticized the dichotomous models of vulnerability analysis and proposed another format in which individual and community risk factors converge. Loh 23 also cited Mechanic and Tanner 39 when highlighting that vulnerability can become chronic and cumulative during the life of individuals and that in vulnerable families traces of vulnerability can be transmitted 46 These authors argue that the term social vulnerability allows a holistic approach to the measurement of individuals' social circumstances. Therefore, for them, social vulnerability would be different from the categorization by socioeconomic status or by the social determinants of health. The measurement of social vulnerability would be carried out using an index formed by various aspects of the social circumstances, consisting of 6 components: communication to engage in wider community, living situation, social support, social engagement and leisure, empowerment and life control, and socioeconomic status. According to da Silva et al, 14 this concept was first proposed in the Acquired Immunodeficiency Syndrome (HIV) epidemic to counter the concept of risk that placed the responsibility for the illness on individuals, increasing stigma, and prejudice. 11 Vulnerability conceptualized as a situation resulting from the interaction of factors such as poverty, racism, lack of social support, cultural differences, and social exclusion, with an impact on health. Baker et al 11 Shi et al 12 Vulnerability conceptualized as a convergence of risk, measured by 3 dimensions: predisposition (race, ethnicity); available resources (income and insurance coverage, characteristics of the community), and need (self-perceived health status). 
Aday 30,31 Stevens et al 13 Vulnerability conceptualized as a situation resulting from multiple overlapping risk factors: race, poverty, parenting, mother's language, and health insurance coverage. Aday 30 da Silva et al 14 Vulnerability conceptualized as the relationship between individual, collective, social, and resource availability aspects that can result in susceptibilities to illness or health problems. Ayres et al 32 Drewes et al 15 Vulnerability conceptualized as frailty related to decreased functions in the functional, somatic, social, and/or psychological domains in older adults. Fried 33 Guanilo et al 16 Vulnerability conceptualized as the relationship between individual, social, and institutional aspects (programmatic) and the political commitment of governments. Ayres et al 32 Mann et al 34 Souza et al 17 Vulnerability conceptualized as the state of individuals or groups that have their capacity for self-determination reduced, and may have difficulties in protecting their own interests due to deficits in power, intelligence, education, resources, strength, or other attributes. Barchifontaine 35 Ayres et al 32 Silva et al 18 (2015) Vulnerability conceptualized as the chance of suffering impairments or delays in childhood development, due to individual, social, and programmatic factors. Ayres et al 32 Pasqual et al 19 Programmatic vulnerability conceptualized as the way and the sense in which technologies already operating, such as health policies, programs, services, and actions, impact on a given situation. Ayres 36 Costa et al 20 Programmatic vulnerability conceptualized as the way in which institutions operate, especially those of health care, reproducing, or deepening socially given conditions of vulnerability. Ayres et al 32 Athié et al 21 Vulnerability conceptualized not only as an instability between a human being and a challenge of the environment, but also as a concept that links a vulnerable person to a coercive situation, a relationship established between the oppressor and the oppressed. Oviedo 37 Dias et al 22 Individual vulnerability conceptualized as the existence of factors of the individual that favor the occurrence of the harm; the program related to access to health services, its organization, the relationship between professionals and users, disease control, and prevention plans and the resources provided to serve the population; and social factors related to the environmental and economic conditions to which the individual is subject. Ayres et al 32 Loh 23 Vulnerability conceptualized as the grouping of multiple risk factors that reinforce each other, resulting from the lack of material and social resources essential to well-being, the presence of risk behaviors and the influence of environmental factors. It can be a chronic and cumulative characteristic and its traits can be passed on to family generations, as well as being interpreted as an antonym for resilience. Shi et al 38 Mechanic et al 39 Seery et al 40 Haidar et al 24 Vulnerability consisting of five types: self-reported (subjective), biological, material conditions, relational, or cultural. Aday 30,31 Coyle and Atkinson 25 Vulnerability conceptualized as a universally shared characteristic of human beings, which becomes amplified for some people through inherent disabilities or external structures of inequalities, in addition to the practices of the services. 
Fineman 41 Kittay 42 Levinas 43 Butler et al 44 Fernandes Bolina et al 26 Vulnerability constituted by the individual, social, and programmatic dimensions, used to identify susceptibilities to problems and health damage of people or communities. Ayres et al 32 Andrade et al 27 Vulnerability constituted by the individual, social, and programmatic dimensions. Ayres et al 32 Oldfield et al 28 Vulnerability conceptualized as structural vulnerability: poverty or racial/ethnic discrimination. Bourgois et al 45 Nguyen et al 29 Social vulnerability conceptualized as a term that allows a holistic and integrative approach to measure the social circumstances of individuals. Andrew et al 46 Guanilo et al 16 deepening this discussion, asserted that Ayres et al 32 expanded the concept proposed by Mann et al. 34 The authors stated that the situation of vulnerability is inversely proportional to the degree of personal responsibility, being constituted by factors associated with access to information and the social and health services, dependent on institutional and community programs, and with the factors related to social issues that increase, sustain or reduce individual responsibility. As Mann et al 34 suggested that vulnerability is the antithesis of responsibility, the concept takes on a preventive characteristic relating individual susceptibility to a given infection (in this case, the HIV). The authors said that the more responsible the individual was for the prevention of infection, the less susceptible s/he would be, that is, the more s/he participates in the prevention process, the less vulnerable the person would be. From this perspective, the authors indicated that the increase in individual responsibilization comes from factors associated with access to information and health services, the environment and social influences, understood by the context in which the person is inserted, which can sustain or resolve individual responsibility, influencing behaviors. Mann et al 34 also created the figure of the 3-dimensional cube relating vulnerability to 3 dimensions: individual, social, and health programs. From 8 pre-established indicators, countries were characterized as to the degree of social vulnerability, including, percentage of GDP, health expenditure, access to information (eg, number of radios and televisions), child mortality, and the development index, among others, which would define whether social vulnerability would be characterized as low, medium, or high. Regarding the health programs dimension, defined here by the national program to combat AIDS, indices were listed that assessed the program's capacity to reduce vulnerability to HIV/AIDS, such as planning and coordination, responding to treatment needs and obtaining resources. From this, a program could be classified on a scale of low, medium, or high vulnerability. 34 However, even if an individual is living in the same society, subject to the actions of the same political program of protection and care, the individual characteristics are the person's own, as are the interactions between the individual, the society and the program, which will determine the overall degree of vulnerability. 
34 Therefore, the great difference of Ayres et al 32 was the expansion and adaptation of a concept used to estimate the prevention capacity for a given infection to a broader concept, capable of analyzing the condition of an individual or group inserted in a different reality, such as women in situations of violence, populations deprived of liberty and older adults, among others. Oldfield et al 28 resorted to the concept of structural vulnerability defined by Bourgois et al 45 with poverty or racial/ ethnic discrimination, exemplified by the limited proficiency of a language among family members, being associated with decreased access and quality of health care. More broadly, Baker et al 11 started from the principle that vulnerability does not result only from economic conditions, but from a sum of characteristics and situations to which certain population groups are exposed (eg, single mothers, older adults, unemployed people, and ethnic minorities) and that consequently impact on both their health and on seeking the care provided in PHC. The authors highlighted mental health as an important problem in this population, resulting mainly from deficient social support, exclusion, social dissonance, or racism. In view of this, vulnerability can be understood as a product of the history, way of life, and culture of our society. In addition to instability between the individual and the environment, Athié et al 21 recognized vulnerability as being associated with the links of interpersonal dependence that permeate the relations of power and coercion, from the perspective of Oviedo and Czeresnia. 37 Vulnerability would represent an inherent characteristic of the human being, considering the vulnerable character of life and the duality between life and death that accompanies human existence. Therefore, vulnerability can be biological, existential, and social, being characterized by events that affect the natural course of life, in the biological aspect, and limit the exercise of freedom and autonomy, in the existential and social field. 37 The concept of vulnerability, comprehended as frailty, in the presence of motor, physical, psychological dysfunctions, in the involvement of diseases or "deficits in social capacities" in older adults, which can lead to a decrease in the capacity to face adversity, causing "frailty at the existential and social level" was exposed by Fried et al. 33 Similarly, Souza et al 17 suggested the vulnerability concept based on the reduction in the capacity for self-determination and the protection of one's own interests, due to deficits in power, strength, cognition, and other attributes, using Barchifontaine 35 as a conceptual framework. This framework is in line with the understanding of Oviedo and Czeresnia 37 regarding the duality of life and death, since vulnerability can affect the ability of individuals to respond to the biological, economic, and social perturbations present throughout life. Vulnerability represented by the intrinsic physical and mental characteristics of people, by the external structures that promote inequality, and even by means of institutional medical practices, such as the diagnosis, was exposed by Coyle and Atkinson. 25 Four types of vulnerability were presented in the social context of care and well-being: vulnerability as embodied difference; as entrenched inequality; as universal; and as a resource for resistance. 
This typology was supported by the concepts of vulnerability by Fineman 41 ; Kittay 42 ; Levinas 43 ; and Butler et al, 44 respectively. There is a difference between the individual, social, and political approaches, with the argument that these various positions are not necessarily opposed. The possibility that vulnerability is a resource for resistance was emphasized, from a philosophical and feminist ethics perspective of care. Theme 2: The Use of the Concept of Vulnerability in PHC In the theme "The use of the concept of vulnerability in PHC," the thematic analysis resulted in the elaboration of 2 sub-themes: Sub-Theme A: Evaluation of Health Policies, Programs, and Services In 11 [11][12][13][18][19][20][21]23,24,27,28 (57.9%) of the 19 articles analyzed, the vulnerability concept was used to assess the response of health policies, programs, and services in relation to individuals or groups identified with some situation of vulnerability, within the context of PHC (Table 4). Some authors discussed the response capacity of the health services according to specific groups' access to health care. Athié et al 21 investigated the experiences of women with common mental disorders (anxiety, depression) in relation to emotional suffering and seeking care. The analysis of the participants' narratives was based on the perspective of vulnerability and accessibility to mental health actions in PHC. Stevens et al 13 concluded that children and adolescents recognized as more vulnerable, due to the juxtaposition of multiple family and community risk factors, had a worse state of health, exacerbated by the finding of difficulty in access and continuity of health care, especially in PHC. Baker et al 11 identified the health requirements of vulnerable groups to assess care by general practitioners and, despite finding worse health conditions among these groups, emphasized that there was no greater demand for the service by this population. However, they recognized that the guarantee of accessibility alone does not reflect an improvement in the health condition of these groups and that actions should consider, in addition to low income, disaggregation, and cultural diversity. Five articles described in Table 4 discussed the evaluation of services regarding the response to certain situations of vulnerability from the perspective of the programmatic dimension. Silva et al 18 used the concept to evaluate the actions of health services and public policies regarding care for children and adolescents in relation to childhood development. Programmatic vulnerability was also addressed in the descriptive epidemiological article by Pasqual et al 19 to assess the care provided to women (over 50 years of age) in a PHC unit. They found worrying aspects of vulnerability associated with the low coverage of health actions recommended by the municipal health policy to address chronic and gynecological diseases in this population. Similarly, Costa et al, 19 through an exploratory-descriptive, qualitative study, analyzed the local public health agendas aimed at confronting violence against rural women. The study was based on the recognition that the lack of local public health agendas for policies and programs related to women's health affected, in particular, rural women in situations of violence, making it difficult to confront the violent situations. 
Loh 23 described and discussed the medical practice in relation to vulnerable patients, highlighting the importance of recognizing social vulnerability, at the time of the medical consultation and for the doctor-patient relationship. The author stated that some conditions, such as living on the street, adolescent pregnancy, or child abuse, do not necessarily produce poor health conditions due to a direct causeeffect relationship, but because they are indicative of more upstream adversities. It is important to note that the author emphasized that it is not enough to merely guarantee access, but that health services must be structured to provide integrative care to vulnerable patients. This in-depth view of the health service was studied by Andrade et al 27 who evaluated, through the perception of health managers, the quality of the service offered to the population in controlling and combating specific diseases, such as arboviruses. The authors explored the concept of vulnerability from the perspective of the quality of health service, which covers the interrelationship between the different actors involved (population served, providers, and health managers) and their specific and collective roles that contribute to qualify or weaken the service provided to individuals and the community. The vulnerability concept was also used to analyze users' perceptions of the performance of health services in the 3 articles described below. Shi et al 12 recognized that a continuous and trusting relationship between healthcare providers and the patient has a positive effect on health, even in vulnerable populations. However, they emphasized that the model of organization and regulation of the health service, provided through corporate or private healthcare plans, can impair this positive perception of the care received. On the other hand, they emphasized that, regardless of the care model, quality primary care considerably reduces the care inequalities in the most vulnerable groups, especially in relation to accessibility and continuity. Haidar et al 24 analyzed the influence of 5 individual vulnerabilities and the interaction between different types, in the evaluation of the experience of primary care, in a universal healthcare system. The study confirmed that individual vulnerabilities were generally associated with a positive assessment of the primary care experience, with the exception of cultural vulnerability. Therefore, these Canadian authors found the existence of vulnerability to be a protective factor against poor assessment of the PHC experience, within a universal healthcare system. Finally, the authors Oldfield et al 28 supported by the concept of structural vulnerability, elected a population group of parents and family caregivers and proposed a group approach to childcare in a PHC center in the United States 19 Evaluation of the care process provided to women, from the age of 50, in a Family Health Unit. Dias et al 22 Analysis of the family vulnerability of children with special needs for multiple, complex, and continuous care. Costa et al 20 Study of the confrontation of violence against rural women, based on the analysis of public agendas. Coyle and Atkinson 25 Instrument for the diagnosis of multiple conditions (mental illnesses and disabilities) in people seen at a Mental Health Center. Athié et al 21 Analysis of women's access to mental health care, based on their narratives in relation to suffering and the care. 
Fernandes Bolina et al 26 Analysis of the dimensions of vulnerability in older adults associated with socioeconomic factors. Loh 23 Analysis of the health system's response to the patient's vulnerability in the family medicine setting. Nguyen et al 29 Description of social vulnerability in patients with multimorbidity, using a vulnerability index, to examine its correlation with the number of chronic conditions and to investigate the chronic conditions associated with the state of greatest social vulnerability. Haidar et al 24 Analysis of the influence of vulnerability on the evaluation of the experience of care in a universal health system. Andrade et al 27 Quality measure of the health service provided by the city to cope with diseases related to Aedes aegypti. Oldfield et al 28 Analysis of the participants' perception of the use of health services, addressing predisposing factors, facilitators, and the need to use health services. of America. They analyzed the participants' perception regarding the use of health services, addressing predisposing factors, facilitators, and needs for the use of health services, according to Andersen's theoretical model. Sub-Theme B: Classification of Individuals, Groups, and Families Of the selected articles, 8 [14][15][16][17]22,25,26,29 used the vulnerability concept to classify individuals, groups, or families, with 5 of these studies carried out in Brazil. 14,16,17,22,26 Da Silva et al 14 Coyle and Atkinson 25 proposed a combined approach to vulnerability, through a dialogical analysis between the experiences of diagnosis in PHC and the reports of people with multiple health problems and/or disabilities attended in a social institution. The authors promoted a reflection on how the communication of the diagnosis in the medical practice can be characterized as a situation of institutional vulnerability if physicians rely on a restricted concept that places patients with multiple health problems as people with limited capacity and in need of paternalistic protection. The article also presents vulnerability as a resource for people's resistance to adversity. Three aspects of the use of the vulnerability concept were identified in the selected articles, characterized by the classification of groups and families, construction of markers and indices, or the evaluation of services, considering the political dimensions, the access and the experience of the care in PHC. Discussion The conceptual construction of vulnerability arose in the area of human rights, identifying people or groups that are legally fragile and need their rights protected. Accordingly, the vulnerable would be people with mental or physical disabilities, children and adolescents, older adults and those institutionalized. This view does not include people and population groups subjected to situations of vulnerability, especially those derived from social, cultural, economic, institutional, and political contexts, characterizing social vulnerability. However, during the AIDS epidemic, in the 1980s, the concept started to involve micro-and macroenvironmental aspects, as well as the individual's interaction with the social and political circumstances, covering the fields of health and social sciences. 37 This review explained the procedural construction of the concept of vulnerability, through the thematic analysis of the production of the authors studied. 
Some researchers criticized the need for the vulnerability concept to incorporate cultural, institutional, social, and biological characteristics, transposing the reductionist presence of risk as its structuring attribute present in some studies that conceptualized vulnerability as the result of the combination or overlapping of risk factors. Other scholars discussed the vulnerability concept as frailty derived from the reduction of domains in the older adult population. Some researchers made interesting reflections on the concept of vulnerability, translating it as an antonym for resilience, antithesis of responsibility, 35 and resulting from the relationship established between the oppressor and the oppressed. 37 Other authors emphasized its generational character in vulnerable families 39 and that vulnerability is an inherent characteristic of human beings. 37 Most of the articles in this review were based on the vulnerability concepts proposed by Aday et al 30,31 and Ayres et al. 32,36 It is important to note that Aday et al 30,31 constructed a concept of multifactorial vulnerability, with overlapping or combined factors producing situations of vulnerability. And multidimensional, including subjective, biological, material, relational, and cultural components. These authors also reported that the most vulnerable population groups could be more harmed if health services were centered on economic aspects by restricting this population's access and offering fragmented health care. In turn, Ayres et al 32,36 presented a concept of vulnerability composed of individual, programmatic, and social dimensions that expressed the potential for illness, non-illness, and coping, which could be applied in the individual scope or collective contexts and conditions. This concept also included the ability of individuals and social groups to fight and recover from health problems. Furthermore, for these authors, vulnerability constituted an indicator of health inequity and social inequality. It can be said that due to the scope of these 2 concepts, they were essential for the development of research in PHC, both for the investigations developed for the evaluation of health policies, programs, and services, as well as for the classification of individuals, groups, or families in situation of vulnerability. The breadth of the concepts elaborated by these authors allowed research in PHC to address the complexity of the problems observed in the care of individuals, families, and the community, as well as the dynamics of the territories inhabited by these people. Consequently, these concepts contributed to the understanding of the problems and the design of possible interventions to minimize the effects of situations of vulnerability in the lives of these populations. In relation to the use of the vulnerability concept in PHC, the thematic analysis allowed the categorization of evidence, highlighting the power of the concept, from the perspective of evaluating health policies, programs, and services, to verify issues of health access, 11,14,21,23 of equity, 13 of coping with violence, 20 of the quality of PHC health services, 27 and of the patients' perception of the performance and use of PHC health services 12,28 of population groups such as children, adolescents, women, and vulnerable people. 
Researchers, when using the vulnerability concept to assess the care experience in PHC, found that individual vulnerabilities were associated with a positive assessment of the experience of primary care, in a universal health system, with the exception of cultural vulnerability. 24 This finding reinforces the role of universal health systems in guaranteeing more equitable access and health care to vulnerable people or those in vulnerable situations. In the thematic category in which the use of the concept of vulnerability classified individuals, groups, and families, there was the possibility of its use to measure the degree of family vulnerability 14,22 and of specific groups, such as older adults, people with mental illness and women, 15,25,26,28 and for the development of instruments for measuring social vulnerability correlated with multimorbidity and the classification of families with older adult members. 16,17 It is considered important to reinforce the criticism about public health research frequently using the term "vulnerable" in an undefined way, making it difficult to understand who the vulnerable people would be and the reasons for this vulnerability, even though some specificity is present in the approach of population groups. The emphasis on the inherent characteristic of vulnerability, to the detriment of the discussion about the possibility of political or procedural changes altering a condition or situation of vulnerability is usually noted in scientific articles. 47 For these authors, the inaccuracy associated with the word "vulnerable" conceals the structural nature of public health problems, favoring the concealment of power relations, in addition to limiting the discussion about the structural transformations necessary to confront situations of vulnerability. This finding justifies the development of studies that enable a clear comprehension of the concept of vulnerability, in public health and especially, in PHC, as a human condition and a concrete situation. In this way, the results of studies on the features of vulnerability can facilitate the understanding of the singularities that make people vulnerable and produce situations of vulnerability and, consequently, contribute to the planning and provision of more equitable and integrative health care. Thus, health teams and decision-makers need to have standardized tools that help identify vulnerable people and develop more equitable and comprehensive interventions that produce better health outcomes. In this perspective, studies on the characteristics of vulnerability can facilitate understanding the singularities that make people vulnerable or have situations of vulnerability. Furthermore, 2 of the significant challenges need to be faced by health system managers. First, to develop cross-cutting public health equity policies based on intra-and intersectoral action, creating horizontal discussion spaces that stimulate dialog and consensual decision-making among managers, health professionals, and the population. 48 Therefore, health care needs to be organized in a network, recognizing the interdependence of actions in their different points and the need for articulated coordination of the health work process. At the same time, it is necessary to invest in awareness and qualification of health professionals, implementing differentiated care approaches for vulnerable groups. 
48 The results of this study can also support future research proposing strategies better targeted at vulnerable groups, and can help health teams to meet the health needs of this population more effectively. A limitation of this review is that it was restricted to articles in the English language; therefore, new studies incorporating other languages are suggested, which would allow further analysis of the object studied.

Conclusion
The main contribution of this integrative review is to highlight the complexity of the vulnerability concept, represented by the ephemerality of human life and the intersection of multiple factors such as income, race, ethnicity, gender, access to health care, poverty, self-perceived health, education, biology, behavior, language, and culture, constituting an intrinsic relationship between the individual or population groups and the structure of society. Its use in PHC instrumentalizes health practice, although some authors understand that this action could be based on historical and causal determinism. It is therefore believed that comprehension of the complexity and breadth of the concept of vulnerability, and of its use in the field of PHC, confirms the need for firm intersectoral and political action to mitigate its undesirable effects on people's health and lives.
Optical analysis of ball-on-ring mode test rig for oil film thickness measurement There are few experimental results available on film thickness at speeds above 5 m/s and they are almost all based on the optical ball-on-disc test rig. In contrast to the contacts in a rolling bearing, in which the lubricant in the oil reservoir distributes symmetrically, ball-on-disc contact shows asymmetry of lubricant distribution due to centrifugal effects. In order to closely imitate the contact occurring between the ball and the outer ring of a ball bearing, this study proposes an experimental model based on ball-on-glass ring contact. An optical matrix method is used to analyze the optical system, which is composed of a steel ball-lubricant-chromium-coated glass ring. Based on the optical analysis, the measurement system is improved in order to obtain a high quality interference image, which makes it possible to measure the film thickness at high-speeds conditions. Introduction Over the past few decades, the measurement of lubricant film thickness has caught considerable attention of many researchers in the elastohydrodynamic lubrication (EHL) community. Given that the film thickness is extremely thin in the Hertzian contact region, an optical interferometry-based method can measure the film thickness accurately. This is shown by the experimental investigations of EHL, owing to a comparable magnitude between optical light wavelength and lubricant film thickness. In the 1960s, Gohar and Cameron first applied optical interferometry to the measurement of oil film thickness in EHL, capturing the first classical interference image with a rotating steel ball loaded against a glass plate [1]. They then gave the results both in point and line contacts in their following works [2]. The subsequent development and improvement of this technique were focused on the lower limit of measurable thickness and higher resolution. The technique used by Spikes and his group could measure films down to less than 5 nm at very low speeds, which combined a spacer layer with spectrometric analysis [3]. Based on this technique, an imaging system was developed to profile the film in the EHL contacts [4], and later extended to obtain the film thickness distribution in the contact area by combining the spacer layer technique with the color image analysis [5]. Aiming for accurate determination of lubricant film thickness distribution, Hartl's group developed an experimental technique that combined chromatic interferometry with a computer image processing method. They obtained a three-dimensional distribution of the EHL film in a range of 60-800 nm with high resolution and accuracy [6]. This colorimetric interferometry used by the same group allowed the study of ultra-thin lubrication films down to 1 nm [7]. The technique, based on monochromatic interferometry, also had a low minimum measurable thickness and high resolution. The relative optical interference intensity method proposed by Luo et al. 
determined the film thickness from the interference intensity at a point in the contact region, with the location of this intensity between the darkest and brightest fringes. The intensity comes from the interference of two reflected beams [8].

Nomenclature
x, x′: coordinates in the radial direction of the ring at which the ray intersects the input and output planes
α, α′: angles of the projections of rays onto the xz plane with the optical axis at the input and output planes
y, y′: coordinates in the axis direction of the ring at which the ray intersects the input and output planes
d_a1: distance between the ring and the cylindrical lens in the corrected system
d_a2: distance between the lens and the cylindrical lens in the corrected system
m_ij: the element at the i-th row and j-th column of M
L: width of the homogeneous medium layer
n_1, n_2: refractive indices of K9 glass and air
ρ: radius of the sphere, ρ > 0 for convex (center of curvature after the interface)
f: focal length of the lens, f > 0 for a convex/positive (converging) lens
R: radius of the ring, R > 0 for convex (center of curvature after the interface)
R_c: radius of the cylindrical lens, R_c > 0 for convex (center of curvature after the interface)

Considering the effects of multi-beam interference and the optical absorption of metals, Guo and Wong [9] developed a multi-beam intensity-based technique with a resolution of 1 nm and a minimum measurable thickness of 1 nm. Experimental studies to measure the film thickness in EHL are mainly based on the above-mentioned optical interferometry mechanisms. Most research studies have focused on the lubrication performance at speeds lower than 5 m/s. Only a few experimental studies have explored the film thickness behavior at speeds higher than 5 m/s. The limited experimental results at very high speeds show that elastohydrodynamic films are thinner than predicted by classical theoretical models, such as the Hamrock and Dowson equation. Hili [10] concluded that inlet shear heating has a significant effect on the behavior of film thickness at very high speeds up to 20 m/s. Liang et al. [11] studied the behavior of starved EHL contacts at high speeds up to 42 m/s with a ball-disc contact, the results of which indicated that centrifugal force significantly affects the starved behavior of the lubricant film. However, almost all published experimental studies on EHL film thickness measurement of point contacts are based on ball-on-disc or ball-on-plate contacts, where the speed distribution in the ball-on-disc is different from the speed distribution in a rolling bearing. Compared to a rolling bearing, where the centrifugal force is perpendicular to the contact surface, the centrifugal force in the ball-on-disc contact is parallel to the disc face, which drags the oil out of the contact region. This may increase the oil supply, and also lead to the interference image haziness mentioned in Liang's study [11]. This paper reports a new ball-on-ring model for the measurement of oil film thickness, which may be suitable to imitate contact conditions occurring in a high-speed ball bearing. This model avoids the uneven distribution of lubricant between the two sides of the "raceway", which occurs in the ball-on-disc contact because centrifugal force pulls the lubricant away from the contact region. Owing to unexpected optical refraction at the outer cylindrical surface of the ring, the initial interference images are fuzzy and out of focus.
To solve the problem, the matrix optics method [12] is used to analyze the optical system, and a correction approach is proposed to modify the optical path. The preliminary experimental results prove the feasibility of the ball-on-ring model in the measurement of oil film thickness.

Optical analysis of the ball-on-ring model
The test rig, as shown in Fig. 1, was developed to measure the film thickness of the lubricant in the point contact region. The test rig imitates a rolling bearing in the form of a steel ball in contact with a ring made of K9 optical glass. A supporting system containing four well-adjusted small rolling bearings underneath the ball ensures that the ball rotates around an axis parallel to the glass ring's central axis. The inner surface of the glass ring is coated with a very thin, semi-reflective layer of chromium with a reflectivity of about 20 percent; thus, when the incident light has reflected from the steel ball and refracted at the chromium, the two interfering beams have approximately equal intensities, which enables the interference to occur and the film thickness to be determined. A laser device is used to provide light with good coherence for the optical interference images. The interference images are magnified using a coaxial microscope and then captured by a charge coupled device (CCD), which is coupled with the microscope. Figure 2 shows a set of interference images captured by the newly developed test rig at different object distances, ranging from far to near the ring. It was found that the resultant interference images were not clear and showed blurring and ghosting. Owing to the haziness of the interference images, they could not be used to determine the film thickness. The reason for the image haziness should be explored in order to improve the quality of the interference images. Therefore, an optical model for the test rig was developed to conduct optical analysis based on the following optical matrix method. The optical transfer matrix provides an effective method of analyzing the cause of the fuzziness in the interference images shown in Fig. 2. Figure 3 gives the definition of a ray, where h_x and h_y are the coordinates of the point at which the ray intersects the reference plane, α is the angle between the projection of the ray in the xz plane and the z axis, and β is the angle between the projection of the ray in the yz plane and the z axis. The propagation of a paraxial ray through a system of centered lenses can be written in the matrix form:

(h_x′, α′, h_y′, β′)^T = M (h_x, α, h_y, β)^T    (1)

where the matrix M is the optical transfer matrix. Figure 4(a) shows the optical model of the ball-on-ring test rig system. Assuming that the interference images emerge at the inner surface of the ring, and that the contact region of the ring is treated as a plane (the depth of the image region in the optical axis direction is much shallower than the depth of field of the microscope), the optical system of the ball-on-ring test rig can be simplified as an ideal optical system consisting of a cylindrical interface, a thin lens, and several homogeneous medium layers, as shown in Fig. 4(b). It should be noted that a thin lens is used to represent the microscope: because the microscope consists of lenses with strict axial symmetry and refracts rays in different directions equally, it has the same function as a thin lens in converging the light rays from an object point to an image point. For simplicity and clarity, we first consider the propagation of light in the xz plane.
Light traveling from one surface to another in a homogeneous medium layer with thickness L obeys the following equation:

(x′, α′)^T = [[1, L], [0, 1]] (x, α)^T    (2)

Light traveling through a thin lens with a focal length f obeys the following equation:

(x′, α′)^T = [[1, 0], [−1/f, 1]] (x, α)^T    (3)

Light traveling through a curved interface between two media with refractive indices n_1 and n_2 (light travels from n_1 to n_2) obeys the following equation:

(x′, α′)^T = [[1, 0], [(n_1 − n_2)/(n_2 ρ), n_1/n_2]] (x, α)^T    (4)

where ρ is the radius of the curved interface. Similarly, the propagation of light in the yz plane can be obtained. Then, the above cases are extended to the three-dimensional situation. The propagation of light can be described as follows:

(x′, α′, y′, β′)^T = [[M_x, 0], [0, M_y]] (x, α, y, β)^T    (5)

When the element is a homogeneous medium layer or a thin lens, M_x = M_y. However, for a ray traveling through a cylindrical interface, the refractions of the ray at the cylindrical interface in the xz plane and the yz plane are different due to the different radii; therefore, from Eq. (4), Eq. (5) becomes the block form in which the xz block carries the curvature of the interface while the yz block reduces to a flat interface:

M_x = [[1, 0], [(n_1 − n_2)/(n_2 ρ), n_1/n_2]],  M_y = [[1, 0], [0, n_1/n_2]]    (6)

Therefore, the optical transfer matrix of the simplified optical system for the ball-on-ring model, as shown in Fig. 4(b), can be represented in terms of matrix multiplication as follows:

S = M_i M_lens M_a M_r M_o    (7)

where M_o is the optical transfer matrix when light travels in the glass ring, M_r is the transfer matrix when light travels through the outer surface of the glass ring, M_a is the transfer matrix when light travels in the air between the glass ring and the lens, M_lens is the transfer matrix when light travels through the lens, and M_i is the transfer matrix when light travels in the air between the lens and the CCD. M_o, M_a, and M_i are translation matrices of the form of Eq. (2), M_lens is the thin-lens matrix of Eq. (3), and M_r is the cylindrical-interface matrix of Eq. (6) with radius R in the xz plane and a flat interface in the yz plane. In order to hold the object-image relationship, the entries S_12 and S_34 of the transfer matrix S must be equal to 0, that is

S_12 = 0,  S_34 = 0    (8)

S_12 denotes the object-image relationship in the xz plane; S_34 denotes the object-image relationship in the yz plane. It can be found from Eq. (8) that S_12 is related to the radius R of the outer surface and the thickness d_o of the glass ring, while S_34 is related only to the glass ring thickness d_o, in addition to the positions of the microscope and CCD. If it is assumed that the air distance d_a (namely the position of the microscope) is a variable and the other parameters are fixed, the set of Eq. (8) has no solution. This means that the object-image relationship cannot hold in the xz and the yz planes simultaneously. Assuming d_ax and d_ay are the required air distances that satisfy the object-image relationship in the xz and the yz planes, respectively, d_ax and d_ay can be obtained from Eq. (8). If the difference between d_ax and d_ay is less than the depth of field, the camera can still obtain clear images. However, a microscope has to be used in the test rig to magnify the interference image in the contact region in order to analyze the film thickness. The depth of field of the microscope is usually very shallow, particularly at higher magnification. Table 1 shows the parameters used in the test rig in Fig. 1, with which the interference images exhibit the blurring and ghosting shown in Fig. 2. The above analysis of the blurring and ghosting of the interference image is schematically shown in Fig. 5. To easily compare the imaging of the system in the xz plane and the yz plane, the optical paths in both planes are drawn in the same diagram. Based on Snell's law, the optical path in the yz plane refracts more outwards than that in the xz plane at the outer surface of the ring. Thus, as shown in Fig. 5(a), rays from the object point focus on the CCD in the x direction, while behind the CCD in the y direction, when d_a is equal to d_ax.
Therefore, the interference fringe is clear in the x direction and blurry in the y direction. If d_a is equal to d_ay, rays from the object point focus on the CCD in the y direction, while in front of the CCD in the x direction. Therefore the interference fringe is clear in the y direction and blurry in the x direction, as shown in Fig. 5(b). When d_a is in between d_ay and d_ax, the interference fringe will be blurry in both directions, as the rays will not focus on the CCD in either direction, as shown in Fig. 5(c). In Fig. 2, the images from left to right were captured by the test rig while increasing the observation distance, which corresponds to the increase of d_a from d_ax to d_ay. Therefore, the blurring and ghosting of the interference image are consistent with the results of the matrix theory. Such interference images cannot be used to determine the oil film thickness because the interference fringes are not adequately sharp and clear. The measurement must be taken after improving the test rig to obtain high-quality interference images. Based on the above optical analysis, in order to obtain clear interference images, the optical path must be corrected according to the formulation of the optical system. Thus, a cylindrical lens is introduced to adjust the different refractions in the x and y directions at the outer cylindrical surface of the ring, as shown in Fig. 6. The axis of the cylindrical lens is parallel to the axis of the glass ring and intersects perpendicularly with the optical axis of the system. When light arrives at the cylindrical lens, it first refracts at the cylindrical surface, then travels in the cylindrical lens, and finally refracts at the flat surface when leaving the cylindrical lens. Thus, the optical transfer matrix of this cylindrical lens is the product of the matrices of these three steps. Then, the optical transfer matrix of the corrected optical system is as follows:

S′ = M_i M_lens M_a2 M_c M_a1 M_r M_o    (12)

where M_c is the transfer matrix of the cylindrical lens, M_a1 is the transfer matrix when light travels in the air between the glass ring and the cylindrical lens, and M_a2 is the transfer matrix when light travels in the air between the cylindrical lens and the lens. M_a1 and M_a2 are translation matrices of the form of Eq. (2), where d_a1 is the distance between the glass ring and the cylindrical lens, and d_a2 is the distance between the cylindrical lens and the lens. Based on the object-image relationship, d_a1 and d_a2 can be obtained from Eq. (12) by requiring S′_12 = 0 in the x direction and S′_34 = 0 in the y direction, which yields Eq. (13). According to Eq. (13), d_a1 depends only on the glass ring and the cylindrical lens, while d_a2 depends on the glass ring, cylindrical lens, and microscope. As Fig. 7 shows, using the parameters in Table 2, d_a1 increases with increasing radius of the cylindrical lens used, while d_a2 decreases. To correct the light deviation at the outer surface of the ring, the radius of the cylindrical lens R_c must be larger than or equal to the outer radius of the ring R, and smaller than the value limited by d_a2 in order to avoid interfering with the microscope objective lens. Thus, by mounting a cylindrical lens at the proper distances described above, the optical system can focus the interference images on the CCD simultaneously in the x direction and the y direction. Thereupon, analyzable interference images can be obtained by this corrected optical system.
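To make the imaging conditions above concrete, the following Python sketch builds the per-plane 2×2 transfer matrices, solves Eq. (8) for the two incompatible air gaps d_ax and d_ay of the uncorrected rig, and then solves the corrected system of Eq. (12) for d_a1 and d_a2. All numerical values (ring thickness and radius, glass index, lens focal length, CCD distance, cylindrical-lens radius and thickness) are illustrative placeholders and not the published parameters of Tables 1 and 2, so the printed numbers only indicate the qualitative behaviour: the two planes of the uncorrected system demand different air gaps, whereas the corrected system can satisfy both imaging conditions at once.

```python
import numpy as np
from scipy.optimize import fsolve

def translation(L):
    """Free propagation over a distance L, Eq. (2)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f, Eq. (3)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def interface(n1, n2, rho=np.inf):
    """Refraction at an interface of radius rho from n1 into n2, Eq. (4).
    rho = inf is a flat interface (the yz plane of a cylindrical surface)."""
    power = 0.0 if np.isinf(rho) else (n1 - n2) / (n2 * rho)
    return np.array([[1.0, 0.0], [power, n1 / n2]])

# Placeholder parameters (illustrative only, not the Table 1/2 values).
n_glass = 1.5163   # K9 glass
d_o     = 10.0     # path length inside the glass ring, mm
R       = 50.0     # outer radius of the glass ring, mm
f       = 40.0     # focal length representing the microscope, mm
d_i     = 200.0    # lens-to-CCD distance, mm
R_c     = 60.0     # radius of the concave cylindrical lens, mm
t_c     = 5.0      # centre thickness of the cylindrical lens, mm

def uncorrected(d_a, rho):
    """One meridian plane of Eq. (7): S = M_i M_lens M_a M_r M_o."""
    return (translation(d_i) @ thin_lens(f) @ translation(d_a)
            @ interface(n_glass, 1.0, rho) @ translation(d_o))

def solve_gap(rho):
    """Air gap d_a that makes the (1,2) entry vanish (Eq. (8)); that entry
    is affine in d_a, so two samples give the exact root."""
    s0, s1 = uncorrected(0.0, rho)[0, 1], uncorrected(1.0, rho)[0, 1]
    return -s0 / (s1 - s0)

d_ax, d_ay = solve_gap(R), solve_gap(np.inf)
print(f"uncorrected: d_ax = {d_ax:.2f} mm, d_ay = {d_ay:.2f} mm")

def cyl_lens(rho):
    """Plano-concave cylindrical lens: curved entry face only in the xz plane."""
    return (interface(n_glass, 1.0)           # flat exit face, glass to air
            @ translation(t_c)                # path inside the lens
            @ interface(1.0, n_glass, rho))   # entry face (concave: -R_c)

def corrected(d_a1, d_a2, rho_ring, rho_cyl):
    """One meridian plane of Eq. (12): S' = M_i M_lens M_a2 M_c M_a1 M_r M_o."""
    return (translation(d_i) @ thin_lens(f) @ translation(d_a2) @ cyl_lens(rho_cyl)
            @ translation(d_a1) @ interface(n_glass, 1.0, rho_ring) @ translation(d_o))

def imaging_residuals(gaps):
    d_a1, d_a2 = gaps
    return [corrected(d_a1, d_a2, R, -R_c)[0, 1],          # xz: curved ring and curved lens face
            corrected(d_a1, d_a2, np.inf, np.inf)[0, 1]]    # yz: both surfaces flat

d_a1, d_a2 = fsolve(imaging_residuals, x0=[20.0, 20.0])
print(f"corrected:   d_a1 = {d_a1:.2f} mm, d_a2 = {d_a2:.2f} mm")
```

The mismatch between d_ax and d_ay, compared with the depth of field of the microscope, is exactly the criterion used above to explain why the uncorrected images cannot be sharp in both directions at once.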
Discussion
For an optical system with a cylindrical interface, where unexpected optical deviation occurs, a coupled cylindrical lens with the same diameter can avoid the need for position adjustment by closely fitting onto the interface [13]. However, in the current ball-on-ring test rig, the glass ring rotates at a very high speed, and thus the coupled cylindrical lens cannot be fitted closely on the ring due to friction, damage to the optical surface, and interference with the fixture. According to the result for d_a1 given in Fig. 7, a concave cylindrical lens with a diameter of between approximately 200 mm and 1,600 mm could be used to overcome the problem of fuzzy images and avoid interfering with other elements in the original ball-on-ring test rig. So instead, a concave cylindrical lens of 1,000 mm in diameter is mounted above the glass ring at a distance of about 30 mm. The corrected ball-on-ring measurement system was able to precisely capture the optical interference images at the ball-on-ring contact region. In order to check the validity of the proposed approach, the test rig was improved with the updated optical imaging system, and preliminary experiments were conducted under a tractive rolling condition, where the steel ball was driven by the glass ring. Figure 8 shows the images captured by a monochrome CCD with a coherent light source under the same incidence condition as Fig. 2. Figures 8(a) and 8(d) show the interference images captured by the uncorrected imaging system. Figures 8(b) and 8(e) show the interference images captured by the uncorrected imaging system, but with a small aperture diaphragm. Figures 8(c) and 8(f) present the interference images captured by the corrected imaging system. Figure 9 shows the optical intensities of the images in Fig. 8 at the centerline along the motion direction and the film profiles corresponding to these images. Figures 8(c) and 8(f), captured by the corrected system, clearly show the change in the interference fringes over the entire contact region, as they have high saturation and definition. This validates the effectiveness of the proposed ball-on-ring test rig. Figures 8(a) and 8(d), captured by the uncorrected test rig, are in focus only in a very narrow field. Although they also have good saturation, most regions of the images have gray level deviations compared to Figs. 8(c) and 8(f), resulting in the different film thicknesses shown in Fig. 9. Using a smaller aperture helped to reduce the cone angle of the rays, resulting in a clear image in the center region, as shown in Figs. 8(b) and 8(e). However, the narrower the aperture, the darker the image becomes when the other factors are kept constant. Meanwhile, the length of the contact radius in the x direction is magnified because the cylindrical surface refracts the rays in the yz plane through larger angles (see Eq. (6) and Fig. 5), while the aperture limits the light admission. The aberration of the contact radius can also be observed in Fig. 9. The images captured by the corrected system have an extra magnification in the x direction due to the cylindrical shape of the glass ring and the cylindrical lens, in addition to the magnification of the microscope. In an optical system of centered lenses, the lateral magnifications are m_11 and m_33 in Eq. (1). In this paper, as shown in Fig. 10, the corrected system has different lateral magnifications in the x and y directions because of the radii of the ring and the cylindrical lens used.
It should be noted that the blurring phenomenon in the interferogram can be eliminated by the use of the concave cylindrical lens, and that the quality of the interference image can be further improved by choosing appropriate parameters for the thickness of the Cr coating, the coating material, and so on, as described in Ref. [14].

Conclusion
The development of modern equipment continuously increases the demands on rolling bearings, for which lubrication plays a very important role, particularly under high-speed/heavy-load conditions. In order to explore the lubrication behavior of high-speed ball bearings, this paper presented a new test rig for the optical measurement of oil film thickness based on a ball-on-ring mode. This closely imitates the contact between the ball and the outer ring in a ball bearing. The following conclusions can be drawn from the results of this study. Owing to the unexpected optical refraction at the outer cylindrical surface of the ring, the interference images were blurry and exhibited ghosting, which made them unsuitable for determining the oil film thickness. An optical model was developed to analyze the measurement system of the ball-on-ring mode. Based on the optical analysis, the reason for the haziness of the interference images was identified. An improved measurement system was proposed to obtain high-quality interference images, making it possible to investigate the lubrication behavior of high-speed rolling bearings.
Recent Advances in Production of Ecofriendly Polylactide (PLA)–Calcium Sulfate (Anhydrite II) Composites: From the Evidence of Filler Stability to the Effects of PLA Matrix and Filling on Key Properties The melt–mixing of polylactide (PLA) with micro- and/or nanofillers is a key method used to obtain specific end-use characteristics and improvements of properties. So-called “insoluble” CaSO4 (CS) β-anhydrite II (AII) is a mineral filler recently considered for the industry of polymer composites. First, the study proves that AII made from natural gypsum by a specifically thermal treatment is highly stable compared to other CS forms. Then, PLAs of different isomer purity and molecular weights (for injection molding (IM) and extrusion), have been used to produce “green” composites filled with 20–40 wt.% AII. The composites show good thermal and mechanical properties, accounting for the excellent filler dispersion and stability. The stiffness of composites increases with the amount of filler, whereas their tensile strength is found to be dependent on PLA molecular weights. Interestingly, the impact resistance is improved by adding 20% AII into all investigated PLAs. Due to advanced kinetics of crystallization ascribed to the effects of AII and use of a PLA grade of high L-lactic acid isomer purity, the composites show after IM an impressive degree of crystallinity (DC), i.e., as high as 50%, while their Vicat softening temperature is remarkably increased to 160 °C, which are thermal properties of great interest for applications requiring elevated rigidity and heat resistance. Introduction The high interest and progress in the production of biosourced polymers such as polylactide or poly(lactic acid) (PLA), is connected to a large number of factors, including the increase in requests for more environmentally sustainable products, the development of new biobased feedstocks and larger consideration of the techniques of recycling, increase in restrictions for the use of polymers with high "carbon footprint" of petrochemical origin, particularly in applications such as packaging, automotive, electrical and electronics industry, and so on [1][2][3][4][5][6][7][8]. Nowadays, when looking for a sustainable society and environmentally friendly products, the market turns to more "durable" applications, therefore important demands can be expected for new biomaterials which clearly offer multiple benefits to customers. Still, for many applications, the carbon footprint of products can be reduced by replacing "fossil carbon" with "renewable carbon" [9]. of our knowledge, the potential of this filler has not been identified sufficiently, therefore, further prospects are required to reveal its beneficial effects for different purposes. On the other hand, the earlier studies realized by us and our collaborators [2] were mostly limited to a specific PLA from the first generation (i.e., an amorphous PLA matrix, not available commercially today) and to the use of synthetic gypsum by-product as obtained directly from the LA process. Today, various PLA grades are available, characterized by different molecular weights, L-lactic acid isomer purity, as well as the presence of special additives, paving the way for new possibilities in applications [2,3]. In addition, it is known that the choice of PLA matrix is of high importance when following different techniques in processing (injection molding (IM), extrusion, 3D printing, etc.), aspects less considered in the previous studies. 
Still, as already mentioned, PLA is often in an amorphous state after any processing step, such as extrusion or IM, showing limited or poor heat resistance (low heat distortion temperature (HDT)). In fact, this is a kind of 'Achilles' heel', limiting PLA use in engineering/technical applications [40]. This parameter (i.e., the degree of crystallinity (DC)) is particularly essential to control the PLA degradation rate, thermal resistance, as well as mechanical, optical, and barrier properties. The adequate choice of PLA matrix, and the combination with a filler that can increase the crystallization rate of PLA, could open the way to better performing composites designed for engineering applications requiring resistance at high temperature. Based on the prior art, the main goal of this study is to present recent experimental results and advances regarding the properties of mineral-filled biocomposites produced with CS AII made from natural gypsum and using PLA matrices of different molecular weights and isomer purity, mainly intended for processing by IM or extrusion. This will allow determining that the adequate choice of the PLA matrix is of key importance from the perspective of the application. Moreover, because CS AII is less known as a performant filler, one additional goal is to increase the interest in its utilization by experimentally proving its stability under harder testing conditions, such as following mixing in water as slurry. Regarding the PLA-AII composites, the study is focused on the characterization of their morphology and evidence for enhancement and tuning of thermal and mechanical properties connected to the nature of PLA and amounts of filler. However, it reveals some unexpected performances for special compositions, i.e., a remarkable increase in both DC and Vicat softening temperature (VST). Due to their properties, these "green" composites are of potential interest for utilization in the biomedical sector (e.g., via 3D printing) as biodegradable/rigid packaging and in technical applications requiring rigidity, heat resistance, and dimensional stability. Materials Three distinct PLA grades were investigated in the frame of the experimental program to consider different applications of and techniques for processing: PLA 4032D (supplier: NatureWorks LLC, Blair, NE, USA), is a PLA of high molecular weight and melt viscosity designed for the extrusion of films and the realization of PLA blends. It is characterized by low D-isomer content (1.4%) and a melting temperature (T m ) in the range of 155 to 170 • C and is abbreviated as PLA1. 2. PLA2: PLA 3051D is an IM grade for realization of products requiring low HDT (supplier NatureWorks LLC) characterized by higher D-isomer content (i.e., 4.3%) and a T m in the range of 150 to 165 • C, according to the technical sheet of the supplier. 3. PLA3: PLA Luminy L105 (supplied by Total Corbion PLA (actually, TotalEnergies Corbion), Gorinchem, The Netherlands) is characterized by high L-isomer purity (L-isomer ≥99%, and implicit by very low content of D-isomer, <1%) and T m of ca. 175 • C. PLA3 is a high flow PLA for spinning and IM, allowing the production of items with thin walls. 
Table 1 shows the rheological information (i.e., melt flow rate (MFR) values) and the results of molecular characterizations by gel permeation chromatography (GPC), also referred to as size-exclusion chromatography (SEC), obtained using an Agilent 1200 Series GPC-SEC System (Agilent Technologies, Santa Clara, CA, USA) and chloroform (at 30 °C) as the solvent (M_w being the weight-average molar mass expressed in polystyrene equivalent, the dispersity being the M_w/M_n ratio between the weight- and number-average molar masses). CaSO4 β-anhydrite II (CS AII) delivered as "ToroWhite" filler was kindly supplied by Toro Gips S.L. (Spain). According to the information provided by the supplier, these products are obtained from selected food and pharma grades of high-purity natural gypsum. They are characterized by high whiteness/lightness (L*), AII being an alternative of choice as a white pigment (TiO2) extender. Color measurements performed in the CIELab mode (illuminant D65, 10°) with a SpectroDens Premium (TECHKON GmbH, Königstein, Germany) have evidenced the high lightness of AII, i.e., an L* of 95.8. Samples of CS dihydrate were also obtained from the same supplier for specific comparative tests (vide infra). Figure 1a,b show selected SEM pictures to illustrate the morphology of the AII filler used in this study for melt-blending with PLAs. The granulometry of the AII sample was characterized by Dynamic Light Scattering (DLS) using a Mastersizer 3000 laser particle size analyzer (Malvern Panalytical Ltd., Malvern, UK), the microparticles having a Dv50 of 5.4 µm and a Dv90 of 14.9 µm.
Specific Methods and Analyses to Demonstrate the Stability of AII as Filler
To evidence the distinct characteristics of AII, CS dihydrate was thermally treated during 2 h at different temperatures (140 °C, 200 °C, and 500 °C) in a Nabertherm B400 furnace (Nabertherm GmbH, Lilienthal, Germany) to obtain the different forms of CS, respectively CS hemihydrate, CS β-anhydrite III (AIII), and CS β-anhydrite II (AII) (Figure 2). Then, the so-produced samples were characterized using TGA and XRD techniques. Furthermore, to test the stability of AII even after immersion in water, AII powders were mixed as a slurry (20%) in demineralized water for 24 h. The solid fraction (AII) was separated by sedimentation and centrifugation, maintained 24 h under a fume hood at room temperature (RT), and then dried under vacuum at low temperature (50 °C) for 2 h to remove the residual moisture. On the other hand, for the sake of comparison, similar experiments were performed with CS (hemihydrate) and AIII, but these fillers were found to be extremely sensitive to water [41], leading to the formation of solid "blocky" structures of CS dihydrate (Figure 2).

Preparation of PLA-AII Composites
All materials were carefully dried at 70 °C overnight to limit PLA degradation during processing at high temperature due to the presence of moisture. Starting from dry-mixed PLAs and CS (AII) blends, PLA composites were obtained by melt-compounding each of the three polyester matrices with 20% and 40 wt.% AII at 200 °C, using a Brabender bench-scale kneader (Brabender GmbH & Co. KG, Duisburg, Germany) equipped with "came" blades (conditions of processing: feeding at 30 rpm for 3 min, followed by 7 min melt-mixing at 100 rpm). The evolution of mechanical torque during the melt-mixing of PLAs and PLA−AII composites was followed and considered as primary rheological information (Figure 3).
In the subsequent step, the materials recovered after the melt-compounding process (after cooling in liquid nitrogen) were ground with a Pulverisette 19 (Fritsch GmbH, Idar-Oberstein, Germany), whereas the specimens for mechanical characterizations were obtained by IM, using a DSM micro injection molding (IM) machine (now Xplore, Sittard, The Netherlands), with the following processing conditions: temperature of IM = 200 °C, mold temperature = 70 °C. For the sake of comparison, neat PLAs were processed using similar conditions as with the mineral-filled composites. Throughout this contribution, all percentages are given as weight percent (wt.%).

Methods of Characterization
(a) Thermogravimetric analyses (TGA) were performed using a TGA Q50 (TA Instruments, New Castle, DE, USA) by heating the samples under nitrogen or air from room temperature (RT) up to a maximum of 800 °C (platinum pans, heating ramp of 20 °C/min, 60 cm3/min gas flow rate).
(b) Differential Scanning Calorimetry (DSC) measurements were accomplished using a DSC Q200 from TA Instruments (New Castle, DE, USA) under nitrogen flow. In the case of PLAs and PLA composites, the procedure was as follows: first heating scan at 10 °C/min from 0 °C up to 200 °C, isotherm at this temperature for 2 min, then cooling at 10 °C/min to −20 °C, and finally a second heating scan from −20 to 200 °C at 10 °C/min. The first scan was used to erase the prior thermal history of the polymer samples. The events of interest linked to the crystallization of PLA during the DSC cooling scan, i.e., the crystallization temperatures (Tc) and the enthalpies of crystallization (ΔHc), were quantified using TA Instruments Universal Analysis 2000 software (Version 3.9A, TA Instruments-Waters LLC, New Castle, DE, USA). Noteworthy, all data were normalized to the amounts of PLA in the samples. The thermal parameters were also evaluated in the second DSC heating scan and abbreviated as follows: glass transition temperature (Tg), cold crystallization temperature (Tcc), enthalpy of cold crystallization (ΔHcc), melting peak temperature (Tm), melting enthalpy (ΔHm), and final DC (χ). The DC (degree of crystallinity) was determined using the following general equation:

χ (%) = (ΔHm − ΔHcc) / (ΔH0m × W) × 100

where ΔHm and ΔHcc are the enthalpies of melting and of cold crystallization, respectively, W is the weight fraction of PLA in the composites, and ΔH0m is the melting enthalpy of 100% crystalline PLA, considered 93 J/g [42]. Notably, the DC was calculated by subtracting the enthalpy of cold crystallization (ΔHcc) and of pre-melt crystallization (if it was evidenced on the DSC curves) from the enthalpy of melting (ΔHm). To have information about the DC of specimens produced by IM, the properties of PLA and PLA-AII composites of interest were evaluated following the first DSC scan. The DSC technique was also used to evidence the transformation of gypsum by heating to 400 °C (the limit of the instrument).
(c) Mechanical testing: Tensile tests were performed with a Lloyd LR 10K bench machine (Lloyd Instruments Ltd., Bognor Regis, West Sussex, UK) according to the ASTM D638-02a norm on type V specimens at a crosshead speed of 1 mm/min. For the characterization of Izod impact resistance, a Ray-Ran 2500 pendulum impact tester and a Ray-Ran 1900 notching apparatus (Ray-Ran Test Equipment Ltd., Warwickshire, UK) were used according to the ASTM D256 norm (method A, 3.46 m/s impact speed, 0.668 kg hammer). For both tensile and impact tests, the specimens produced by IM were previously conditioned for at least 48 h at 23 ± 2 °C under a relative humidity of 50 ± 5%, and the values were averaged over a minimum of five measurements.
(d) DMA (Dynamic Mechanical Analysis) was performed on rectangular specimens (60 × 12 × 2 mm3) obtained by IM (DSM micro-IM machine) using a DMA 2980 apparatus (TA Instruments, New Castle, DE, USA) in dual cantilever bending mode. The dynamic storage and loss moduli (E′ and E″, respectively) were determined at a constant frequency of 1 Hz and an amplitude of 20 µm as a function of temperature from −20 °C to 140 °C, at a heating rate of 3 °C/min.
(e) Vicat softening temperature (VST) measurements were performed according to ASTM D1525, using HDT/Vicat 3-300 Allround A1 (ZwickRoell GmbH & Co., Ulm, Germany) equipment. The samples, with a thickness of 3.2 mm, were rectangular shaped (12 × 10 mm2). All samples were evaluated under a load of 1000 g and at a heating rate of 120 °C/h using a minimum of 3 specimens.
(f) Scanning Electron Microscopy (SEM) analyses of the PLA samples, previously cryofractured at liquid nitrogen temperature, were performed using a Philips XL scanning electron microscope (Eindhoven, The Netherlands) at various accelerating voltages and magnifications. For better information and easy interpretation, the SEM was equipped for both secondary electron (SE) and back-scattered electron (BSE) imaging. Reported microphotographs represent typical morphologies as observed at, at least, three distinct locations. SEM analyses of AII microparticles were performed at different magnifications in the SE mode (5 kV accelerating voltage). SEM analyses were also performed on the surfaces of selected specimens fractured by tensile or impact testing.

New Evidence of CS AII Stability as Filler for the Industry of Polymer Composites
It is a noteworthy reminder that CS is available in several forms: dihydrate (CaSO4·2H2O, commonly known as gypsum), hemihydrate (CaSO4·0.5H2O, Plaster of Paris, stucco, or bassanite), and different types of anhydrite [43][44][45]. The dehydration of gypsum (CS dihydrate) above 100 °C at low pressure (vacuum) or under air at atmospheric pressure favors β-CS hemihydrate formation. An increase in the temperature to about 200 °C allows producing so-called β-anhydrite III (β-AIII), which is not stable, whereas calcination at temperatures higher than 350 °C (e.g., at 500-800 °C in an industrial process) allows obtaining stable β-anhydrite II (abbreviated as AII). The CS phases obtained by progressive dehydration and calcination of gypsum at higher temperatures are in the following order [43]: dihydrate → hemihydrate → anhydrite III → anhydrite II → anhydrite I (at temperatures higher than 1180 °C). DSC is a powerful tool of analysis that can be considered to evidence the thermal transformations of CS dihydrate during heating (Figure 4). Accordingly, in the first step, the gypsum was transformed at about 140 °C into β-CS hemihydrate. When the hemihydrate was heated at higher temperature, it was converted into "soluble" anhydrite AIII (an endothermal process, a shoulder being observed at about 160 °C on the DSC curve), and above 350 °C, "insoluble" anhydrite (β-AII) was generated, as evidenced by the exothermal transformation on the DSC curves.
To allow the use of CS in the production of polymer composites (e.g., based on polyesters, such as PLA), we restate that it is of prime importance to dry (dehydrate) the CS dihydrate or hemihydrate prior to melt-compounding, or the use of stable anhydrite forms is required, keeping in mind the importance of minimizing free moisture. Indeed, PLA is stable in the molten state provided that it is adequately stabilized and dried to have a maximum acceptable water content of 250 ppm, or even below 50 ppm, in the case of processing at high temperature [3]. Moreover, following a comparative study, it has been reported elsewhere that synthetic β-AII (made from gypsum from the LA production process) is much better suited for melt-blending with PLA than β-AIII, which is by far too sensitive to atmospheric water absorption [29]. Indeed, AIII has a dramatically quick uptake of water, which was evidenced at the start of the thermogravimetric analyses, thus its rapid transformation to CS hydrated forms can be assumed. Moreover, in the case of AIII analyzed by XRD, it was reported that the high humidity triggered an instant transformation into CS hemihydrate (bassanite) [46]. Accordingly, due to its instability, AIII is not recommended for melt-blending with polymers with high sensitivity to degradation by hydrolysis during processing at high temperature. Figure 5a shows the comparison of thermogravimetric analyses (TG) of CS derivatives produced from natural gypsum by thermal treatments at different temperatures. AII showed a particularly good thermal stability in all ranges of temperature (with a weight loss lower than 1% by heating to 600 °C). On the contrary, CS dihydrate and CS hemihydrate record a weight loss of 20-21% and 6-7%, respectively, in the dehydration process step (below 200 °C).
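These dehydration losses can be cross-checked against simple stoichiometry. The short snippet below uses standard molar masses (not data from the paper) to reproduce the expected figures for complete dehydration of the dihydrate and the hemihydrate.

```python
# Theoretical water loss on complete dehydration of CaSO4 hydrates,
# computed from molar masses in g/mol (standard atomic weights).
M_H2O, M_CASO4 = 18.015, 136.14

for name, n_water in [("CaSO4*2H2O (gypsum)", 2.0), ("CaSO4*0.5H2O (hemihydrate)", 0.5)]:
    loss_pct = 100.0 * n_water * M_H2O / (M_CASO4 + n_water * M_H2O)
    print(f"{name}: {loss_pct:.1f} % theoretical weight loss")
# Prints ~20.9 % and ~6.2 %, consistent with the 20-21 % and 6-7 % measured by TGA.
```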
The comparative TGAs of the recovered products after the slurry tests (Figure 5b) evidence only some low content of superficial water/moisture in the sample labelled "AII", without fully excluding the presence of some traces of sub-hydrates. On the other hand, the total rehydration with the formation of gypsum in the case of CS hemihydrate and of AIII was confirmed by the high amount of water lost during heating (21-22%). Interestingly, the XRD technique is also largely used to evidence the differences between the various CS derivatives [46]. Figure 6 shows the comparative XRD patterns of products obtained following the transformation of gypsum at different calcination temperatures (see experimental section) to obtain β-CS hemihydrate and β-AII. AIII was not included here due to its sensitivity to moisture and its quick transformation into hydrated forms. For CS dihydrate (gypsum), five major diffraction peaks, i.e., (020), (021), (130), (041), and (−221), have been reported elsewhere [47]. Positions (2θ) of these peaks were confirmed in the present study, respectively, at 11.6°, 20.7°, 23.4°, 29.1°, and 31.1°, whereas additional XRD peaks were observed at higher 2θ angles. The diffractogram of β-CS hemihydrate featured specific peaks at 2θ ≈ 14.7°, 25.6°, 29.7°, and 31.9° [48]. The presence of a supplementary peak at 2θ ≈ 11.6° (seen also for CS dihydrate) is reasonably ascribed to the inherent absorption of moisture, leading to traces of other CS hydrated forms. After the dehydration and thermal treatment (i.e., at 500 °C) of gypsum (monoclinic crystal system), the crystalline structure of the obtained CS β-anhydrite II (AII) was different, i.e., orthorhombic [49]. It was characterized by only one intense peak at 2θ ≈ 25.4° and a number of smaller ones at higher scattering angles [31]. On the other hand, Figure 7 shows the results of XRD analyses of selected samples recovered after the slurry tests. Accordingly, the following was found: when mixed in water, only AII was stable, keeping its original crystalline structure as evidenced by the same XRD peaks, while the other CS forms (such as AIII and CS hemihydrate) were rehydrated to gypsum (CS dihydrate). These new results respond to the current questions asked by potential users requiring evidence of AII stability following contact with moisture/water. AII exhibits the closest packing of ions, which makes it highly dense and strong, whereas the absence of empty channels means it reacts slowly with water [43]. By considering its overall properties (high thermal stability, whiteness, low hardness (Mohs), very low solubility/rate of rehydration, and others), AII can be considered a promising natural filler for the industry of polymer composites.

Characterization of PLA−AII Composites
First, it is important to point out that the results discussed hereinafter concern the use of AII without any surface treatments, whereas the PLAs used as polymer matrices are characterized by different molecular weights (MW,PLA1 > MW,PLA2 > MW,PLA3) and rheology, to allow adapted melt processing techniques (e.g., extrusion or IM). Furthermore, for PLA2 and PLA3 (PLA grades of high fluidity designed for IM), the attention will be focused on the differences linked to the purity in L-lactic acid enantiomer. By considering the evolution of torque values during the melt-mixing process as primary rheological information, in all cases the addition of filler (AII) into PLA led to an increase in mechanical torque/melt viscosity.
Furthermore, the torque was clearly determined by the molecular weights of the PLAs (see Figure 3, experimental part), and the following order was observed by melt-mixing at 200 °C: PLA1-40% AII > PLA1 > PLA2-40% AII > PLA2 > PLA3-40% AII > PLA3.

Morphology of PLA−AII Composites

After the grinding process, the AII microparticles used for this study had a volume median diameter of ~5 µm (granulometry analysis by DLS). The particulate filler was characterized by a low aspect ratio, whereas a shared morphology, i.e., particles with irregular shape and a fibrillar/flaky aspect due to the cleavage of CS layers, was evidenced by SEM (Figure 1, section Materials). Regarding the morphology of the composites, for better evidence of the filler distribution through the PLA matrix, SEM imaging was performed using back-scattered electrons (BSE) to obtain a higher phase contrast. Figure 8a-h shows representative SEM-BSE images of cryofractured surfaces of PLA−AII composites with 20-40% filler. Well-distributed/dispersed particles, with various geometries and a quite broad size distribution, were evidenced at the surface of the cryofractured specimens. A moderate but effective adhesion between the filler (AII) and PLA can be assumed by considering the overall SEM images, and also the mechanical performances of the composites. It is worth recalling that such quality of dispersion was obtained without any prior surface treatment of the filler. However, better dispersion of individual particles was easily obtained at lower filler content (20% AII) (Figure 8a,b), whereas at high filling (40 wt.%), the presence of some aggregates/zones of poorer dispersion cannot be totally excluded. Furthermore, it is difficult to conclude that the melt-compounding with internal mixers led to important differences in the morphology of the composites (i.e., in relation to the type of PLA matrix and the rheology of the blends, which at similar amounts of filler is essentially determined by the molecular weights of the PLA).

Thermogravimetric Analysis (TGA)

The results of the thermal characterizations by TGA (Table 2) allow concluding that the addition of AII into the different PLAs primarily leads to composites characterized by similar or better thermal properties than those of the neat polymers processed under similar conditions. Interestingly, following the comparison of the processed PLAs, PLA1 showed better thermal characteristics than PLA2 and PLA3. This difference was also seen in the case of the composites, and it is reasonably ascribed to the higher molecular weights of PLA1. An increase in the onset of thermal degradation (T_5%, temperature corresponding to 5% weight loss) and in the maximum decomposition temperature (T_d, from max. D-TG) was found as a general tendency by filling the PLAs with up to 40% AII. However, more spectacular changes were observed when PLA2 or PLA3 was used as the polymer matrix. Furthermore, from the D-TG curves, it is observed that the rate of thermal degradation (wt.%/°C) at the temperature corresponding to the maximum rate of degradation was much reduced/delayed in the case of the composites, in quite good correlation with the amount of filler.
The enhancement of thermal stability by filling PLA with AII is a key property in the perspective of processing and further application of such materials.

Differential Scanning Calorimetry (DSC)

It is generally recognized that PLAs of higher L-isomer purity (less than 1% D-isomer) are characterized by higher crystallization kinetics, properties that can be improved in the presence of nucleating agents, allowing their utilization in applications requiring a high HDT [50]. In contrast, PLA resins of higher D-isomer content (4-8%) are more suitable for thermoformed, extruded, and blow-molded products, since they are more easily processed when the crystallinity is lower [3]. First, from the DSC analyses (Table 3 and Figure 9a,b) it was observed that the addition of AII had beneficial effects on the crystallization of PLA, distinctly evidenced for PLA1 and especially for PLA3 as the polymer matrix.

Table 3. Comparative DSC results of the neat PLAs and PLA−AII composites obtained using the different PLA grades as polymer matrix (second DSC heating scan, 10 °C/min).

Moreover, the DSC curves obtained during the cooling and second heating scans clearly revealed that the association of AII with PLA3 of high L-isomer purity (≥99%), characterized by medium molecular weights (macromolecular chains with increased mobility during the cooling process), yielded composites with surprising crystallization kinetics and a high DC. In fact, the DC remarkably increased from 31% (neat processed PLA3) to about 60% in the composites (PLA3−(20-40)% AII). Moreover, the effect of the filler was also significant using PLA1 as the matrix (the PLA of higher molecular weights, D-isomer = 1.4%), the composites being characterized by a better/moderate crystallization ability determined by the level of filler: the DC of PLA1 (1.8%) increased in the composites up to about 20%. Still, using PLA2 with a higher D-enantiomer content (4.3%), there were no important changes in crystallinity (Table 3 and Figure S2 in the Supplementary Material). The DC of neat PLA2 and of its composites remained very low (DC < 2%) and was only slightly affected by the amount of filler. Regarding the cold crystallization process recorded in the second DSC scan, PLA2 had a lower crystallization ability (T_cc determined at high temperatures, i.e., 133-135 °C), whereas for the PLA1 samples, T_cc decreased with the amount of filler, from 116 °C to 106 °C. However, by comparing the neat PLAs and their respective composites, for most of the samples there was no significant modification of the glass transition and melting temperatures (T_g and T_m). In relation to the results of the DSC characterizations, it was once more proved that the highest crystallization kinetics/DC are obtained by reducing the molecular weights of PLA and using PLA of higher L-enantiomer purity [51], i.e., for PLA3. Still, the addition of AII into PLA3 leads to composites of interest for technical applications (by considering their overall performances), because they show a superior DC, reasonably ascribed to the nucleating effects of the filler and the inherent characteristics of the polymer matrix.
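The DC values discussed above are reported without the formula used to compute them. For orientation, the degree of crystallinity of filled PLA is commonly evaluated from the DSC melting and cold-crystallization enthalpies, normalized by the actual PLA fraction of the composite. The expression below is a minimal sketch of that conventional calculation; the reference enthalpy of 100% crystalline PLA (≈93 J/g) and the symbols are assumptions of this sketch, not values taken from the present study.

```latex
% Conventional estimate of the degree of crystallinity (DC) of the PLA phase
% in a composite containing a weight fraction w_f of filler (assumed form):
\[
\mathrm{DC}\;(\%) \;=\; \frac{\Delta H_m - \Delta H_{cc}}{\Delta H_m^{0}\,\bigl(1 - w_f\bigr)} \times 100,
\qquad \Delta H_m^{0} \approx 93\ \mathrm{J\,g^{-1}}\ \text{(100\% crystalline PLA)},
\]
% where \Delta H_m and \Delta H_{cc} are the melting and cold-crystallization
% enthalpies measured by DSC on the composite sample.
```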
Mechanical Characterizations

The specimens for mechanical testing (Figure 10) were produced by IM (see experimental section). The strength of particulate-filled polymer composites depends, to a great extent, on the properties of the matrix, the interfacial adhesion between the matrix and the dispersed phase, and the filler shape, size, and amount [52]. Noteworthy, comprehensive studies regarding the interfacial adhesion between PLA and the microfiller (AII) have been carried out using different techniques by the research group of Pukánszky B. and collab. [53]. The Young's modulus (Figure 11b) was significantly enhanced, from ≈2000 MPa for the neat PLAs to a value of about 3000 MPa by filling with 40% AII, without any significant influence linked to the nature of the PLAs. Moreover, when comparing the tensile strength (σ) values of the neat PLAs and the PLA−AII composites, it was observed that they followed the same order (σ_PLA1 > σ_PLA2 > σ_PLA3) as that of the molecular weights (M_wPLA1 > M_wPLA2 > M_wPLA3). Accordingly, at high filling (40% AII), it is worth noting the tensile strength of the composites obtained using PLA1 and PLA2 as matrix (51 MPa and 47 MPa, respectively). As expected, all PLAs had a low nominal strain at break (about 5%), values that decreased to 2-3% for the composites. Regarding the impact resistance, it has been reported in the literature that in some cases well-dispersed particulate fillers, at optimal loadings or with specific surface treatments, can contribute to an increase in impact properties [54,55]. Interestingly, the Izod impact resistance of the composites (Figure 11c) was slightly improved by incorporating 20% AII in each of the three studied PLAs, whereas a further increase in filler content to 40% led to an important reduction in this parameter.
In the case of the PLA−20% AII composites, it is once more proved that well-distributed/dispersed rigid particulate microparticles can contribute to the dissipation of the impact energy by reducing crack propagation through different mechanisms (e.g., crack bridging [56] or crack pinning, which require a certain adhesion between polymer and filler, whereas debonding at the matrix-particle interface is also to be considered as a mechanism [48]). However, additional SEM images of samples fractured in tensile or impact testing (Supplementary Material, Figures S3 and S4) suggested that both types of regions were seen at the interface zone (PLA matrix-filler): regions accounting for good/moderate adhesion, due to the maintenance of intimate physical contact between the constituents after the mechanical solicitation (which explains the noticeable tensile properties), and zones of debonding or shear yielding, traditionally ascribed to a toughening mechanism with rigid particles, which contribute to reducing cracking and dissipating the energy of the impact solicitation [37,54,55]. On the other hand, at higher filling (i.e., 40 wt.%), more heterogeneous (mechanically weak) regions, e.g., aggregates of microparticles, are inherently present and may act as stress concentrators, causing a decrease in tensile and impact resistance.
Therefore, for applications in which the impact strength is a key concern, these composites need to be modified to fulfil the industry requirements [2]. Modification of the filler (AII) by special surface treatments and/or the addition of a third component into the PLA−AII composites, i.e., a plasticizer [55], an impact modifier [37,57], etc., can represent alternatives of choice for better impact resistance performance.

Dynamic Mechanical Analysis (DMA)

DMA was used to provide information about the performance of the PLA−AII composites over a broad temperature range (i.e., from −20 °C to 140 °C). Figure 12a,b shows the evolution of the storage and loss moduli (E' and E", respectively) of the neat PLAs and their composites as a function of temperature. In correlation with the AII percentage, E' increased distinctly in the low-temperature glassy region for all composite samples (Figure 12a), trends that are similar to those recorded for the evolution of the Young's modulus in tensile tests, highlighting the reinforcing effect of the filler. This increase is ascribed to considerable interfacial properties allowing the stress transfer at low deformations, which is finally expressed in the enhancement of E' and Young's modulus with filler content [48]. Undeniably, as revealed by the key example in Table 4, at the temperature of 60 °C (very close to T_g), the loading of filler was responsible for the level of E', apparently without any influence linked to the nature of the PLA matrix: E' was twofold increased for the PLA composites filled with 40% AII compared with the unfilled PLAs. Figure 12b shows the evolution of E" (loss modulus) as a function of temperature. The results showed that the composites were characterized by a slightly higher E", determined by the filler loading, an increase assigned to the contribution of the mechanical loss generated in the interfacial regions [48]. However, the maximum of the E" peak ascribed to the T_g zone was only slightly changed, from 66-68 °C for the neat PLAs to a maximum of 70 °C upon AII filling, results that are in good agreement with the DSC data. Still, due to the high filling and increased crystallinity, at higher temperatures (i.e., in the range 80-140 °C) the PLA3−40% AII composites showed the most important enhancements of both E' and E". As demonstrated under dynamic solicitation by DMA, melt-blending of PLA with AII offers the possibility to produce PLA composites for applications requiring enhanced mechanical rigidity and a higher temperature of utilization.
Vicat Softening Temperature (VST)

It is generally assumed that fillers can be effective reinforcing phases leading to the improvement of specific thermal properties, such as HDT and VST, mostly in semicrystalline polymers (polyethylene (PE), PP, polyamide (PA), etc.). The VSTs of the neat PLAs were around 62-63 °C (Figure 13), whereas the addition of 20-40% filler into PLA1 and PLA2 led only to a slight increase in VST, to about 65 °C. On the other hand, somewhat unexpectedly when considering the melt processing conditions (the temperature of the mold was around 70 °C, without any additional annealing process to obtain higher crystallinity), the most remarkable enhancements were obtained for the PLA3−AII composites, i.e., an increase in VST to 160 °C, primarily ascribed to the high level of crystallinity. In fact, the IM tests highlighted that the association of PLA3 with AII leads to PLA composites characterized by remarkable crystallization kinetics, which allows the production of items with a high DC. Indeed, the comparative DSC analyses performed on the IM specimens (Supplementary Material, Figure S5a-c) confirmed the high DC (49-53%) of the PLA3−AII composites, results that are in good agreement with those presented in the thermal characterization (DSC) section.
Before concluding, it is important to mention that the composites concerned in this study need further optimization by taking into account the constraints imposed by the application (e.g., the amount of filler is a key parameter), and this is expected to be realized following their production in larger quantities using twin-screw extruders for the melt-compounding. Moreover, in the frame of forthcoming contributions, it will be important to reconfirm the performances of the composites using optimized processing conditions (extrusion, IM, etc.) and also to determine other characteristics of interest (flexural strength, HDT, rheological behaviour and MFI, evolution of molecular parameters as a function of the residence time at high temperature and shear, etc.). Furthermore, studies of the crystallization mechanisms using alternative investigation techniques (polarized light microscopy (POM), XRD, etc.) are of further concern.

Conclusions

The study provides answers to current requests regarding the production of environmentally friendly materials using PLAs as the matrix (biosourced and biodegradable) and the utilization of mineral fillers such as CS β-AII made from natural gypsum. New tests and specific characterizations were performed to prove the stability of AII, as produced by the calcination of gypsum at high temperature (i.e., 500 °C), in comparison with other CS derivatives. Characterized by excellent thermal stability and a low absorption of moisture/water of rehydration, the so-called "insoluble" AII is stable, maintaining its crystal structure even after mixing in water as a slurry. Moreover, the overall results confirmed that CS β-AII is a filler of real interest for the polymer composites industry. PLAs of different L-lactic acid isomer purity and molecular weights (as supplied for distinct processing techniques) were used to produce, by melt-compounding, composites filled with 20-40% AII. The addition of filler leads to composites characterized by enhanced thermal stability and increased rigidity (Young's modulus), determined by the amount of filler. Interestingly, the PLA impact resistance was not decreased when up to 20% filler was added, whereas the ultimate strength properties were dependent on the molecular characteristics of the PLA matrix, keeping a similar order (PLA1 > PLA2 > PLA3).
The values of tensile strength (e.g., 50-56 MPa for the PLA−20% AII composites) are of real interest for engineering applications. Melt-blending PLA with AII leads to a two-fold enhancement of the storage modulus (at 60 °C) and offers the possibility to use PLA in applications requiring enhanced mechanical rigidity and/or a higher temperature of utilization. The good thermomechanical properties are ascribed to the fine distribution and dispersion of AII within the PLA matrix, and to favorable interfacial interactions between the components. Advanced crystallization kinetics linked to the addition of AII were evidenced by DSC for PLA3 of high L-isomer purity (≥99%) and lower molecular weights. Still, following the IM processing, the crystallization properties remain impressive in the case of PLA3 filled with 20-40% AII (DC of about 50%), whereas a VST of 160 °C was attained on the IM specimens (only 63 °C for the neat PLA). By considering the overall performances (tensile strength, stiffness, VST, and other specific properties), these composites are proposed for development/production at a larger scale, as a quick answer to current requests for the use of PLA-based products in engineering applications. Nevertheless, by carefully choosing the PLA matrix, these "green" mineral-filled (AII) composites can be designed for processing by IM, extrusion, thermoforming, and 3D printing.
The association between albumin and C-reactive protein in older adults

Albumin has been found to be a marker of inflammation. The purpose of our study was to investigate the relationship between albumin and C-reactive protein (CRP) in 3579 participants aged 60 to 80 years from the National Health and Nutrition Examination Survey (NHANES). In order to evaluate the association between albumin and CRP, we downloaded the analyzed data (2015-2018) from the NHANES in the United States, and the age of the study population was limited to 60 to 80 years (n = 4051). After exclusion of subjects with missing albumin (n = 456) and CRP (n = 16) data, 3579 subjects aged 60 to 80 years were retained for a cross-sectional study. All measures were calculated accounting for NHANES sample weights. We used the weighted χ2 test for categorical variables and the weighted linear regression model for continuous variables to calculate the differences among groups. The subgroup analysis was evaluated through stratified multivariable linear regression models. Smooth curve fitting and generalized additive models were also carried out. We found that albumin negatively correlated with CRP after adjusting for other confounders in model 3 (β = −0.37, 95% CI: −0.45, −0.28, P < .0001). After converting albumin from a continuous variable to a categorical variable (quartiles), albumin level was also negatively associated with serum CRP in all groups (P for trend < .001 for each). In the subgroup analysis stratified by gender, race/ethnicity, smoking, and high blood pressure, the negative correlation of albumin with CRP remained. We also found that the level of CRP decreased further in the other race group (OR: −0.72, 95% CI: −0.96, −0.47, P < .0001) and in participants who smoked (OR: −0.61, 95% CI: −0.86, −0.36, P < .0001). Our findings revealed that albumin levels were negatively associated with CRP levels among the elderly in the USA. Besides, CRP level decreased faster with increasing albumin level in the other race group and in participants who smoked. Considering this association, hypoalbuminemia could provide a potential predictive biomarker for inflammation. Therefore, studying the relationship between albumin and CRP can provide a screening tool for inflammation to guide therapeutic intervention and avoid excessive correction of patients with inflammation.

Introduction

Continued population aging will cause the population over 60 to double in the next few decades. [1][4] As the population ages, the number of older individuals with inflammation will increase dramatically in the coming decades. [2] Because of the medical costs, morbidity, and mortality associated with inflammation, understanding the markers and risk factors for inflammation is essential for the prevention, early diagnosis, and management of inflammation. C-reactive protein (CRP) levels are known to increase dramatically in response to inflammation, and CRP increases in circulation during inflammatory events. [5,6] Therefore, it is well established that CRP is a measurable marker of inflammation. Caloric-proteic malnutrition is also a common disease in the elderly, [7,8] and albumin is considered to be an important indicator of nutritional status. [7,9,10] Meanwhile, albumin is the most widely studied protein for diagnosing malnutrition, and the definition of hypoalbuminemia is used as an indication of malnutrition to screen malnourished people.
[11] Albumin plays an important role in a number of physiological mechanisms, including inflammation and nutrition. [14] Significant loss of muscle mass has been observed in elderly people with low albumin levels. Hypoalbuminemia is a prognostic factor for mortality in elderly people, whether they live in the community, are in hospital, or are institutionalized. Low levels of albumin are associated with worse recovery following acute pathologies. [15] There is growing evidence that serum albumin is a negative acute phase protein, which supports the view that plasma albumin is a marker of inflammation. [16] Recently, the focus of research has been the relationship between albumin and CRP. However, controversial findings have been reported in the limited available evidence. Specifically, a decrease in albumin was accompanied by a significant increase in CRP in some studies. [16,17] Nevertheless, another study reported that albumin had been observed to increase in inflammatory states. [18] Therefore, the purpose of our study was to use a representative sample from the National Health and Nutrition Examination Survey (NHANES) to assess the association between albumin and CRP in the elderly. This study included a representative sample of a multi-ethnic population, while the large sample size enabled us to conduct a subgroup analysis; to our knowledge, this is the first study to evaluate the correlation between albumin and CRP in different multivariable regression models.

The authors have no funding and conflicts of interest to disclose. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request. This study was conducted according to the guidelines laid out in the Declaration of Helsinki.

Study population

In our study, the analyzed data were obtained from the NHANES (2015-2018). NHANES is a nationally representative survey of the United States population, which uses a complex, multi-stage, probability sampling design to provide a large amount of information about the nutrition and health of the general United States population. [19] We combined a total of 19,225 samples representing the 2 cycles of NHANES from 2015 to 2018. The age limit of the study population was between 60 and 80 years old, and after excluding patients with severe diseases such as malignant tumors or trauma, 4051 samples met the research requirements. After further exclusion of subjects with missing albumin (n = 456) and CRP (n = 16) data, 3579 subjects aged 60 to 80 years were retained for a cross-sectional study. The National Center for Health Statistics ethics review board approved all NHANES protocols, and participants or their proxies provided informed consent prior to participation. [20] Therefore, these data are publicly available and do not require further ethical review.
Variables

The principal variables of this study were albumin (independent variable) and CRP (dependent variable). The albumin concentration was measured using the DcX800 method, a bichromatic digital endpoint method. CRP was measured on the Beckman Coulter UniCel DxC 600 Synchron and the Beckman Coulter UniCel DxC 660i Synchron Access chemistry analyzers. In addition, the following covariates were included: age, alanine aminotransferase, blood urea nitrogen, creatinine, gamma-glutamyl transferase, body mass index, alkaline phosphatase, aspartate aminotransferase, cholesterol, creatine phosphokinase, globulin, lactate dehydrogenase, triglycerides, uric acid, total bilirubin, race/ethnicity, marital status, smoking, high blood pressure, and gender. Details of the albumin and CRP measurement processes and of the other covariate acquisition processes are available at www.cdc.gov/nchs/nhanes/. All estimates were calculated accounting for NHANES sample weights. The missing values of categorical variables were grouped separately, and the missing values of continuous variables were replaced by mean values. Study participants were divided into quartiles based on albumin levels. All statistical analyses were conducted using the statistical software EmpowerStats (http://www.empowerstats.com) and R packages (http://www.R-project.org), with statistical significance set at P < .05.

Discussion

The main purpose of our study was to investigate whether albumin was independently related to CRP. In this study, we used a large, nationally representative sample of American elders. The multivariable regression analyses indicated that an elevated albumin correlated with a lower CRP. Additionally, the decrease in CRP was more pronounced in the other race group and in participants who smoked.

In the past few decades, malnutrition characterized by hypoalbuminemia has been a common problem in older people. [8,21,22] Hypoalbuminemia is not only a sign of malnutrition but also has negative consequences on most organs and systems. [23] Meanwhile, it is significantly negatively correlated with the development of complications, mortality, inflammation, and the average length of stay in acute patients. [24,25] Identifying the presence and severity of inflammation is essential for assessing malnutrition. Cytokines, produced during inflammation, often result in anorexia, in large part by impairing the ability to digest or absorb nutrients. [26] Hospitalized patients with severe inflammatory responses (defined as CRP > 100 mg/L) did not demonstrate a strong, measurable response to nutrition support in a large randomized controlled trial. [27] Among our representative United States population, a lower albumin was associated with a greater CRP in older adults. Considering this association, hypoalbuminemia could provide a potential predictive biomarker for inflammation. Therefore, studying the relationship between albumin and CRP can provide a screening tool for inflammation to guide therapeutic intervention and avoid excessive correction of patients with inflammation.

Currently, there are limited clinical studies on the relationship between albumin and CRP in the elderly, and some of these studies are controversial. Two cross-sectional studies reported that people with hypoalbuminemia have elevated serum CRP levels compared with patients with normal albumin levels. [28,29] Ridker et al [30] found that reduced albumin was associated with CRP when it increased to values >13 mg/dL. Evans et al [31] also reported that albumin characterizes inflammation rather than describing nutrition status or protein-energy malnutrition. Both critical illness and chronic illness are characterized by inflammation and, as such, hepatic reprioritization of protein synthesis occurs, resulting in lower serum concentrations of albumin. Moreover, this paper has been approved by the American Society for Parenteral and Enteral Nutrition Board of Directors. However, some studies came to the opposite conclusion and demonstrated that albumin increased under systemic inflammation. [32,33] Therefore, we further studied potential relationships and risk factors between albumin and CRP using a large sample.

The biggest strength of our study is that it includes representative samples of a multiracial population, with better generalizability to the United States population. Moreover, our large sample size allowed us to perform subgroup analyses, and this is, to our knowledge, the first study to assess the association between albumin and CRP in different multivariable regression models.
There are also some limitations in our study. First, our study has a cross-sectional design, which limits the inference of a causal correlation between albumin and CRP among older people. Consequently, further prospective studies with large study samples are needed to clarify the correlation between albumin and C-reactive protein in the elderly. Second, we have not adjusted for other potential confounding factors, which may still lead to bias. Third, the age range of participants was 60 to 80 years; therefore, our conclusions cannot be generalized to the elderly over 80.

Conclusion

Our findings revealed that albumin levels were negatively associated with CRP levels among the elderly in the USA. Besides, CRP level decreased faster with increasing albumin level in the other race group and in participants who smoked. Therefore, albumin should be correctly recognized as an inflammatory marker. Studying the relationship between albumin and CRP can provide a screening tool for inflammation to guide therapeutic intervention and avoid excessive correction of patients with inflammation.

Table 1. Characteristics of the study population based on serum albumin quartiles. Mean ± SD for continuous variables: the P value was calculated by the weighted linear regression model; (%) for categorical variables: the P value was calculated by the weighted chi-square test. ALT = alanine aminotransferase, ALP = alkaline phosphatase, AST = aspartate aminotransferase, BMI = body mass index, CRP = C-reactive protein, CPK = creatine phosphokinase, GGT = gamma glutamyl transferase, LDH = lactate dehydrogenase.

Table 2. The association between albumin (g/dL) and C-reactive protein (mg/L).
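The survey-weighted regression described in the Methods is not accompanied by code in the paper (the authors used EmpowerStats and R). The following is a hypothetical, minimal Python sketch of how a weighted linear model of CRP on albumin, adjusted for a few covariates, could be fitted with statsmodels; the file name, variable names, and choice of covariates are illustrative assumptions, not the authors' actual analysis pipeline, and the simple weighting below only approximates a full NHANES design-based analysis.

```python
# Minimal illustrative sketch (not the authors' code): survey-weighted linear
# regression of CRP on albumin using NHANES examination weights.
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis file with one row per participant (column names assumed)
df = pd.read_csv("nhanes_2015_2018_subset.csv")

# Restrict to ages 60-80 and drop rows with missing exposure/outcome,
# mirroring the exclusions described in the text.
df = df[(df["age"] >= 60) & (df["age"] <= 80)]
df = df.dropna(subset=["albumin_g_dl", "crp_mg_l"])

# Design matrix: albumin plus a few continuous covariates (assumed names)
covariates = ["age", "bmi", "alt", "ast", "creatinine"]
X = sm.add_constant(df[["albumin_g_dl"] + covariates])
y = df["crp_mg_l"]

# Weighted least squares with the NHANES sample weights approximates the
# weighted linear regression model mentioned in the Methods.
model = sm.WLS(y, X, weights=df["sample_weight"]).fit()
print(model.summary())

# Quartile analysis: albumin as a categorical variable (quartiles), as in Table 2
df["albumin_q"] = pd.qcut(df["albumin_g_dl"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(df.groupby("albumin_q")["crp_mg_l"].mean())
```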
Humidification–dehumidification desalination process using green hydrogen and heat recovery

We propose the use of green hydrogen as fuel for a seawater heater in a humidification/dehumidification (HDH) desalination plant to increase its productivity, to allow scaling to large dimensions without negative environmental effects, and to guarantee continuous operation. We develop a mathematical model of the proposed HDH configuration. For operating conditions that guarantee very low NOx production, the fuel consumption is ∼0.03 kg of H2 per kg of pure water produced. If the exhaust gases from the seawater heater are used for heat recovery, the GOR of the equipment may increase by up to 39% in relation to the same equipment operating without heat recovery. The operation cost of freshwater is comparable to the costs obtained by other equipment in the literature. If the water produced in the combustion of hydrogen is condensed during the heat recovery process and then added to the freshwater produced, the production cost is reduced by 20%. We found that an excess of air in the air + fuel mix beyond the minimum value appropriate for low NOx generation does not provide significant benefits. The efficiency of the seawater heater has an impact on the production of pure water, but this impact is strongly mitigated by the heat recovery process. Fuel consumption increases proportionally with the decrease in the effectiveness of the heat recovery device, which is a key parameter for optimal performance. A hydrogen heater is also a good alternative as an auxiliary power source to guarantee continuous operation. In sunny hours an H2 heater may be used to increase productivity by preheating the seawater, and at night the system could operate 100% on H2.

Introduction

Large-scale desalination plants are based on membrane filtration or thermal distillation. Large amounts of fossil fuel are consumed for the operation of these plants, either directly or indirectly, releasing greenhouse gases and contributing to the global warming of the planet [1]. One possible strategy to mitigate this problem is to use clean energy sources [2] like solar radiation. Solar stills, which have long been used to produce drinking water on a small scale, use solar radiation to evaporate water without boiling, and therefore do not produce greenhouse gases. Within a solar still, the circulating air carries humidity, which is further condensed in a cooler region of the equipment, releasing the enthalpy of condensation, which is lost to the environment. The thermal efficiency of solar stills is very limited, not exceeding 30%-40% [3], because the energy from the condensation of water cannot be recovered. This is due to the design of the equipment itself: in solar stills, the condensation surface is also the transparent cover through which the solar radiation enters the system. For this reason, scaling up solar stills to plants with large productions of drinking water is unfeasible. In humidification/dehumidification (HDH), air is usually used as the carrier gas, but in contrast to solar stills, the evaporator, condenser, and water heater are separate parts of the equipment and can be designed to maximize the recovery of the enthalpy of condensation of the humid air [1,4]. For this reason, theoretical and experimental studies of the desalination process by HDH technology are steadily gaining interest.
Up to the present, only small-scale HDH plants, essentially experimental in nature, have been designed, assembled, and tested. Most investigations have focused their attention on the search for configurations that maximize performance. The main disadvantage of solar HDH systems is their low productivity. Scaling to large freshwater production while maintaining a low environmental impact remains unsolved [5,6]. This is because the scale of solar-powered HDH plants is limited by the surface of the solar collector, which represents nearly 35% of the total cost of the equipment. Additionally, the surface of a solar collector for a medium-scale HDH plant is prohibitively large. Besides, in regions with freshwater scarcity and limited solar irradiance, HDH cannot operate with a solar collector as the energy source. The use of water heaters based on fossil fuels would degrade one of the greatest virtues of HDH, impacting negatively on the environment. This problem could be overcome if geothermal energy were accessible, but this is not the case in many regions of the Earth. In these regions, the implementation of HDH as a viable technology to supply freshwater would require the use of an alternative heating source. In [7] it is shown that the water temperature plays an important role in the productivity of freshwater and that an extra heat source, when available, should be located immediately before or after the solar collector. Additional energy sources are also mandatory to guarantee continuous operation. Preheating of seawater using hot flue gases was analyzed previously [8-10], but considering flue gases from the combustion of fossil fuels. At present, most countries, and especially the European Union, are making great efforts to move to a greenhouse-gas-free economy. To achieve its goal of becoming climate neutral by 2050, the EU plans to install 40 GW of renewable-energy capacity to feed hydrogen electrolyzers in the next decade [11]. For this reason, the development of technologies for the use of hydrogen as fuel for steam boilers has recently taken a strong boost, especially due to the impact that its application to home heating would have [12]. The use of hydrogen as a substitute for natural gas in steam boilers would allow energy cost savings [13]. On the other hand, the by-products of the complete combustion of hydrogen (neglecting secondary reactions) are pure water in the form of steam [14] and other gases already present in the air, if air is used as the oxidant. Therefore, burning green hydrogen may be considered a 'clean' source of energy [15]. Green hydrogen generated by electrolysis and the use of fuel cells have been proposed along with reverse osmosis desalination plants, to provide backup power when demand increases [2,16]. So far, green hydrogen has not been proposed as a power source for thermal desalination technologies. In desalination equipment based on HDH technology, a seawater heater using green hydrogen as fuel would have some remarkable advantages, because hydrogen has the highest energy content by weight of any common fuel and it is more efficient than many other energy sources, including some renewable alternatives. In this work we present, for the first time, water desalination equipment with HDH technology that uses hydrogen to heat the seawater. We will propose a possible configuration that takes advantage of the exhaust gases from the water heater and reduces the cost of freshwater production.
We will build a mathematical model of the equipment, which includes, for the first time, a thermodynamic analysis of the energy recovery from the exhaust gases of a boiler that uses hydrogen as fuel. With this model, we will carry out a study of the operating conditions of the proposed equipment and we will analyse the viability of scaling the HDH technology to a high-productivity installation without generating nitrogen oxides in the combustion process. Finally, we will analyse some relevant issues in relation to the sustainability of the proposed technology, some of its limitations, and questions that would be worth studying in the future.

A basic HDH desalination equipment

A basic HDH desalination equipment configuration is shown in figure 1. The HDH cycle is as follows: the saline water enters the condenser at a temperature T_1. The water circulates through the condenser to partially condense the water vapor from the air leaving the humidifier at the temperature T_9. The latent heat of condensation of the water raises the seawater temperature, which leaves the condenser at the temperature T_2, whereas the air, which loses most of its humidity, is cooled to the temperature T_6. It then enters the humidifier, repeating a closed cycle. At the outlet of the condenser, the water enters a seawater heater, whose nature does not need to be specified at this point. The water leaves the heater at the temperature T_4, to be sprayed into the humidifier with the purpose of bringing it into contact with the air stream being humidified. In what follows, we will develop a mathematical model of this equipment with a set of mass and energy balance equations. The following assumptions are made: (a) the system is operated at steady state; (b) the power consumed by the pumps and fans ([17,18]) is negligible compared with the water heater power. For the thermophysical properties of seawater we used the correlations provided by [19].

Humidifier

According to figure 1, the energy balance of the humidifier is [20]

\[ \dot{m}_a h_{a,6} + \dot{m}_w h_{w,4} = \dot{m}_a h_{a,9} + \dot{m}_B h_{w,5} + \dot{Q}_{H,loss}, \qquad (1) \]

where \( \dot{Q}_{H,loss} \) is the heat loss to the environment through the humidifier insulation. We will assume that for dry air at T_ref,a = 0 °C the reference value of the enthalpy is H_0 = 0. Therefore, the enthalpy of humid air at temperature T_j is written as [21]

\[ h_{a,j} = Cp_a\, T_j + x(T_j)\,\bigl( Cp_v\, T_j + \lambda \bigr). \qquad (2) \]

The latent heat of evaporation of water λ and the sensible heats of air Cp_a, water vapor Cp_v, and liquid water Cp_w are considered constant. The absolute humidity of the air is obtained in terms of the atmospheric pressure P and the saturation vapor pressure P_v at the dry bulb temperature, using the expression given by [22] (see supplementary material). We also introduced the relative humidity of the air, H_R,j (0 < H_R,j ≤ 100%), because the air may not be saturated at point j = 9, although at point j = 6 the air is probably saturated, because it has just come from a condensation process [23]. Therefore, the absolute humidity at each point is expressed in terms of H_R,j, P, and P_v (equation (3)). For most HDH equipment P is the atmospheric pressure, and we will assume this working condition for our installation. To complete the humidifier model we will use its effectiveness. The effectiveness is the actual change of enthalpy rate produced in relation to the maximum change in total enthalpy rate that can be achieved [24], and it is computed from equation (4), where \( \dot{H} = \dot{m} h \) (water or air, as appropriate) and we have used sub-indices i for inlet values, o for outlet values, a for air, and w for water.
In this expression, H^ideal_wo is the enthalpy of the water at the dry air inlet temperature, and H^ideal_ao is the enthalpy of saturated air at the water inlet temperature. The mass balance in the humidifier is expressed by equation (5), where ṁ_B is the mass flow rate of brine leaving the humidifier at point (e). The mass flow of air ṁ_a is constant.

Condenser

As the air stream flows in a closed loop, the balance of enthalpy of the air in the condenser is the same as for the humidifier. We then have [21]

\[ \dot{m}_w h_{w,1} + \dot{m}_a h_{a,9} = \dot{m}_w h_{w,2} + \dot{m}_a h_{a,6} + \dot{m}_{pw} h_{w,6} + \dot{Q}_{C,loss}. \qquad (6) \]

We have assumed that the distillate water leaves the condenser at the same temperature as the air stream, T_6. As for the humidifier, we also define the condenser effectiveness (equation (7)). The mass balance in the condenser is given by equation (8). The mass flow rates of air ṁ_a and seawater ṁ_w are constant.

Sea water heater

The energy balance in the seawater heater is given by equation (9), where Q̇ is the rate of effective heat absorbed by the feed water, and it is known [21]. In this case we do not consider heat exchange with the environment. With this last expression we completed the thermodynamical model of the HDH equipment. The temperature of the feed water, the mass flow rate of water, and the mass flow rate of air are considered known quantities. There are five unknown temperatures, T_2, T_4, T_5, T_6, and T_9, and five equations: (1), (4), (6), (7), and (9). The pure water mass flow rate and the brine mass outflow rate can be obtained as a function of the temperatures and relative humidities by (3) and (5). These equations have been extensively used to model basic HDH equipment. Nevertheless, we validated the model using the experimental data furnished by [21]; details of the procedure and results can be found in the supplementary material. The experimental results of [21] could be adequately reproduced, and we concluded that the thermodynamical model works properly.

Using H2 in a HDH desalination plant

A heat source that increases the temperature of the water entering the humidifier would allow an increase in the productivity of the equipment [7]. It would also allow scaling up to higher productions. As discussed before, the use of fossil fuels is not an alternative, given the negative environmental impact. In this section we will analyse the use of hydrogen as an energy source to heat the seawater. A thermodynamical model of the H2 seawater heater is developed in the next subsections.

H2 + air seawater heater

The development of H2 boilers is just now taking a strong impulse, but up to the present, neither a thermodynamic analysis of these devices nor data on their operation have been published. For this reason, we warn readers that it is not possible to compare all our results with published or experimental data.

Temperature of the flame

The by-products and the temperature of combustion in the burner of the seawater heater depend on the fuel and oxidant used. In this study, we will consider H2 as the fuel and air as the oxidant. As this is the first analysis of the use of H2 as fuel in an HDH plant, we will introduce some simplifying assumptions:
• (i) The air humidity in the H2 − air mix is negligible.
• (ii) We do not consider constituent gases of air other than oxygen and nitrogen.
• (iii) The mix is made at the standard temperature (T⊕ = 25 °C).
• (iv) The combustion is complete.
• (v) Molecular dissociations of the flue gases are not considered.

Taking these factors into account would produce only minor changes in the results. Under these conditions, the temperature of combustion T_COMB is obtained by solving the energy balance of equation (10) [25], where Lc = 118.680 × 10^6 J kg−1 is the lower heating value of H2 and φ is the excess of air mass flow rate in the mixture with respect to the air needed for a stoichiometric mix. Appropriate correlations for the specific heat capacities were used, and they can be found in the supplementary material. In figure 2 we show the temperature of combustion of H2 given by (10) as a function of the excess of air in the H2 − air mixture, together with the temperature computed by [26]. It is important to note that the flame temperature should remain below ∼1350 °C in order to inhibit, or keep very low, the production of NOx gases [27]. Nitrogen oxides have negative consequences for human health, cause acid rain, and also contribute to global warming. Therefore, φ > 1.2-1.3 guarantees a low NOx production [28], and we will analyse the equipment under this restriction.

Exhaust gases

The complete combustion of a given mass rate ṁ_H2 of hydrogen using air as the oxidant will produce exhaust gases composed of water vapor, nitrogen, and the excess air, with the mass flow rate of each species given by (12). The seawater heater efficiency is defined as the fraction of the total heat produced that is absorbed by the water to raise its temperature to the desired value [29]; in this paper we will use the expression of equation (13). Typical water heater efficiencies range between 0.8 and 0.9 [29]. The exhaust gases are the main source of heat losses. The temperature of the exhaust gases may be computed by means of the empirical expression of equation (14) [30-32], which considers 2% of the heat lost to the environment by radiation and convection [32]. In this paper, we do not take into account losses due to incomplete combustion or the production of ashes and slag. We remark that this expression was obtained for gas fuels other than H2. Changing a heater from fuel gas to H2 results in a mass flow reduction through the heater that could impact the convective heat transfer to the water and thus the stack temperature, with the consequent variation of the exponent in (14). Very little information is available on this question (there is no expression like (14) for H2 in the literature), but it seems that the stack temperature does not change substantially when switching from natural gas to H2 [33]. The heater efficiency should also change very little [33]. The minimum acceptable exhaust gas temperature for heaters and boilers is usually 120 to 130 °C in order to avoid condensation. In our investigation, we will impose the condition T_7 > 100 °C. This is a rather low value in relation to the standards considered for water heater operation, but it ensures that we cover all possible operating conditions without losing generality, while remaining above the dew point temperature of water vapor at atmospheric pressure. Within the imposed conditions, the working region of the seawater heater is shown in figure 3. There is a maximum efficiency for the seawater heater of ∼93%, because above this value the conditions we have imposed for the flame temperature and for the temperature of the exhaust gases cannot be fulfilled simultaneously: according to (14), if the flame temperature is T_COMB < 1350 °C, the condition T_7 > 100 °C sets an upper bound on the heater efficiency (equation (15)). The seawater heater efficiency also depends on the excess of air used in the mix, but this effect was not considered in this study.
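The exhaust-gas mass flow rates behind (12) and the efficiency definition (13) are not reproduced above. The relations below are a hedged reconstruction consistent with the complete-combustion assumption and with the numerical values quoted later in the text (≈9 kg of water vapor per kg of H2); the symbol ε_B and the rounded stoichiometric factors are assumptions of this sketch, and the exact expressions used by the authors may differ.

```latex
% Sketch of the exhaust-gas mass flow rates for complete combustion of H2 in air
% (2 H2 + O2 -> 2 H2O), per unit mass flow rate of hydrogen \dot m_{H_2}:
\[
\dot m_{H_2O} \simeq 8.995\,\dot m_{H_2}, \qquad
\dot m_{O_2,\mathrm{stoich}} \simeq 7.94\,\dot m_{H_2}, \qquad
\dot m_{\mathrm{air,stoich}} \simeq 34.3\,\dot m_{H_2},
\]
\[
\dot m_{\mathrm{exhaust}} = \dot m_{H_2} + \varphi\,\dot m_{\mathrm{air,stoich}}
\quad\text{(water vapor + nitrogen + excess air)},
\]
% and a heater efficiency consistent with its verbal definition (fraction of the
% heat released that is absorbed by the seawater):
\[
\varepsilon_B = \frac{\dot Q}{\dot m_{H_2}\,L_c}, \qquad
L_c = 118.680 \times 10^{6}\ \mathrm{J\,kg^{-1}}.
\]
```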
Effect of the water temperature on the pure water flow rate

We are now in a position to calculate the effect of increasing the water temperature on the production of pure water. As pointed out by [7], the rate of pure water production increases very significantly with an increase in the humidifier water inlet temperature (this is expected because of the exponential increase with temperature of the absolute humidity of saturated air). The pure water mass flow rate vs the temperature of the water at the humidifier inlet is shown in figure 4, where we used the parameters of the equipment (the condenser and humidifier effectivenesses, Q̇_C,loss, Q̇_H,loss, and H_R,9) already computed in the supplementary material section, but varying T_4. As already shown by [7], the relation between T_4 and ṁ_pw is basically linear. In our case, the growth rate of the pure water production is 0.057 kg h−1 for each degree that the water temperature increases. For this reason, to increase the productivity of HDH technology, an energy source additional to solar energy is necessary.

Heat recovery

The use of fuel, either liquid or gaseous, to heat the seawater will generate heat losses through the exhaust gases, which, on the other hand, are the most important losses in the process. It is essential to add some device to recover part of this heat. The alternative configuration we will consider to evaluate the use of H2 in HDH desalination with heat recovery is shown in figure 5. The configuration of the desalination cycle is the same as before, corresponding to an open cycle for the water flow and a closed cycle for the air stream. Now, the water leaves the condenser at the temperature T_3, whereas the exhaust gases that leave the seawater heater at the temperature T_7 exit the heat exchanger used as a recuperator at the temperature T_8. The thermodynamical model for the humidifier and the condenser is exactly the same as before (equations (1), (4), (6) and (7)), and it will not be shown here again. Let us consider that the exhaust gases that come out of the stack of the seawater heater are composed of water vapor and N non-condensable gases (in this case N = 2: the excess air used and the nitrogen). We will assume that (a) the flue gases are at a temperature high enough to guarantee that the water does not condense before entering the recuperator, and (b) the water vapor contained in the flue gases condenses inside the recuperator. Condition (b) is fulfilled if the exit temperature of the flue gases is below the dew point. The dew point can be computed using Dalton's law for the vapor partial pressure [34]. Full details are given in the supplementary material, but it is below 100 °C. Under these conditions, the enthalpy balance of the recuperator is given by equation (16), where we used i for inlet values, o for outlet values, and g for the exhaust gases. The mass flow rate of each species (water vapor, nitrogen, and oxygen) is given by (12). To complete the recuperator thermodynamical model, its effectiveness is defined in the same way as for the humidifier and the condenser [24] (equation (17)). In this expression, H^ideal_wo is the enthalpy of the water at the exhaust gas inlet temperature, and H^ideal_go is the enthalpy of the exhaust gases (including condensed water) at the water inlet temperature.

Method

In the previous sections, we have shown the set of equations governing the steady-state operation of the HDH equipment. To solve the model we used the following strategy: the whole HDH system is separated into two sub-systems: (a) A basic HDH subsystem composed of the humidifier and the condenser.
(b) A second subsystem composed of the seawater heater and the recuperator.

The second subsystem provides a mass flow rate of water ṁ_w to the humidifier at the temperature T4. We adopt as reference for the working parameters of the basic HDH subsystem those provided by the 8 valid experimental runs of [21]; therefore, the quantities T2, T4, ṁ_w and ṁ_pw are assumed to be known. Equations (10), (13), (14), (16) and (17) form the governing equations to be solved in order to obtain T3, T7, T8, the mass flow rate of H2 consumed in the seawater heater, and the mass flow rate of each species of the exhaust gases; equation (12) is used to compute the mass of each species of the exhaust gases. In addition, we leave as free parameters the excess of air φ, the heater efficiency η_B and the recuperator effectiveness ε_Rec. As already mentioned, we restrict the domain of the excess of air to values that guarantee that the temperature of the exhaust gases exceeds 100 °C and, on the other hand, to values of φ ≥ 1.3, so that the production of NOx gases is inhibited or very low. For the recuperator effectiveness we explored values in the range 0.7-0.9 [24,35], and for the seawater heater efficiency values from 0.80 to 0.92 [29].

The mathematical model is solved in the following way:
(a) For a given value of φ, T_COMB is computed by means of (10).
(b) With a given value of η_B, T7 is computed from (14).
(c) With this value of T7 and a given value of ε_Rec, we compute T3, T8 and ṁ_H2 (and therefore the mass of all the exhaust gas species through (12)) by means of an iterative algorithm applied to the non-linear system formed by (13), (16) and (17). The iteration is stopped when the corrections to the unknowns are less than 10^-6 in absolute value.

Performance parameters of the system

We will use the gained output ratio (GOR) and the operating cost parameter (OCP) to assess the performance of the system. They are presented in the following subsections.

Gained output ratio

To characterize the operation of thermal desalination equipment, the GOR is widely used. It is computed as the ratio of the latent heat of evaporation of the pure water produced to the heat used to heat the seawater, and it represents the amount of heat that could be recovered during the desalination cycle. According to figure 5, the GOR of the equipment with heat recovery is given by equation (18). In what follows we also define GOR_0 as the gained output ratio obtained with the basic equipment, without heat recovery; in this case the water heater inlet temperature is T2 (see figure 1). As an example, for the second experimental run of [21], whose measured value of T2 is approximately 40 °C, both quantities follow directly from these definitions.

Operating cost parameter

The OCP gives the cost of the energy consumed to produce 1 kg of fresh water; it is the cost of the hydrogen burned in the seawater heater per kilogram of fresh water produced, where COST(H2) is the unit cost of H2 and depends on the technology used for its production. One of the best developed technologies for the production of green hydrogen is electrolysis, in which electricity produced from renewable sources is used to split water into hydrogen and oxygen. According to recent evaluations [36], green hydrogen is produced at a cost of 4-5 US$ kg−1, and considerable effort is dedicated to reducing this cost to 1 US$ kg−1 by 2050. We will use these values in our analysis. Although this indicator considers only the cost of energy in evaluating the cost of water (in this case the cost of the hydrogen used to heat the seawater), we must take into account that this is the most relevant factor in the water desalination cost structure.
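As an illustration of the two performance indicators just defined, the following Python sketch evaluates GOR and OCP directly from their verbal definitions (latent heat carried by the distillate over the heat delivered to the seawater heater, and hydrogen cost per kilogram of fresh water). The latent heat value and the example numbers in the last line are assumptions used only to show the arithmetic; they are not taken from the paper's tables or from equations (18)-(19).

H_FG = 2.33e6   # J/kg, latent heat of vaporisation near typical HDH top temperatures (assumed)

def gor(m_pw_kg_h, q_heater_w, h_fg=H_FG):
    """Gained output ratio: latent heat of the pure-water stream over the thermal
    power used to heat the seawater."""
    latent_power = m_pw_kg_h / 3600.0 * h_fg   # W
    return latent_power / q_heater_w

def ocp(m_h2_kg_h, m_pw_kg_h, h2_cost_usd_per_kg):
    """Operating cost parameter: US$ of hydrogen burned per kg of fresh water."""
    return m_h2_kg_h * h2_cost_usd_per_kg / m_pw_kg_h

# e.g. ~0.026 kg of H2 per kg of distillate (figure 9) at today's 4-5 US$/kg:
print(f"OCP ~ {ocp(0.026, 1.0, 4.5):.3f} US$ per kg of fresh water")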
A comprehensive analysis, leading to a more accurate indicator such as the levelized cost of water, demands the inclusion of the cost of the facilities, their maintenance and operation, the amortization of capital, etc. The price of the hydrogen that we are using already incorporates all these factors on the production side, and estimating the cost of the facilities of an HDH plant like the one proposed here would only be possible after an adequate dimensioning of the equipment, which is beyond the scope of this work. Nevertheless, as an example, specific values of the capital cost of an experimental HDH plant are given in [37]; there, the equipment, maintenance and operation contribution to the cost of the produced water is about 10% of the contribution from the energy cost. In this sense, the OCP can be taken as a good indicator of the cost of the fresh water produced.

Results

In figure 6 we show the rate of H2 consumption as a function of the excess of air for the operating conditions of the 8 runs reported in [21]. Although the maximum relative variation in pure water production among all 8 runs is ~43%, the hydrogen consumption varies only by ~11%. Runs 2 and 8 represent the extreme cases: considering the values reported in [21] (see supplementary material), the heat needed to raise the water temperature from T2 to T4 is maximum for run 2 and minimum for run 8, as shown in figure 7. The hydrogen consumption of the complete equipment reflects the behaviour shown in figure 7.

In figure 8 we show the improvement of the GOR when heat recovery is applied: the GOR increases by about 31%-32% with respect to the GOR_0 of the basic equipment. It is worth noting that GOR and fuel consumption are directly linked; combining (13) and (18) gives an explicit relation between them. In order to analyse the sensitivity to different parameters, from here on we adopt run 2 of [21] as the reference.

The fuel consumption with and without heat recovery is shown in figure 9. Heat recovery has a deep impact on fuel consumption; the difference is more pronounced for high pure water production, reaching ~24% of fuel savings when ṁ_pw ≈ 3 kg h−1.

Regarding the OCP, we performed two analyses. The OCP versus the efficiency of the seawater heater is shown in figure 10 for the conditions of run 2 [21]. We show the OCP for the present cost of green H2 and for the cost expected by 2050 and, for comparison, the OCP of pure water production found by [38]. We observe that heat recovery guarantees that the OCP of the proposed equipment falls within the range of OCP values found by [38].

The mass flow rate of vapour present in the exhaust gases corresponds to the same amount of ultra-pure water that was used in the electrolyzer to produce the hydrogen burned in the seawater heater (without considering losses). It equals 8.995 × ṁ_H2, and according to figure 9, producing 1 kg of pure water requires ~0.026 kg of H2. Therefore, for each kg of pure water produced in the HDH cycle, 0.026 × 8.995 kg of water is also condensed in the recuperator. Once condensed, this water can be added to the product: the cost of water is reduced by about 20% if this condensed vapour is added to the fresh water produced in the HDH cycle. If instead this water is returned to the electrolyzer, part of the energy used to produce the ultra-pure water decomposed in the electrolyzer to generate the consumed H2 can be saved. OCP values may be even lower if we consider the fuel savings that occur when the water temperature rises. This is shown in figure 11: the OCP may be up to 20% lower than the one found by [38].
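A quick back-of-the-envelope check of the condensate credit discussed above, using only the two figures quoted in the text (about 0.026 kg of H2 per kg of distillate and 8.995 kg of steam in the flue gas per kg of H2 burned); the printed percentage is simply an approximate restatement of the ~20% cost reduction mentioned above.

h2_per_kg_pw = 0.026        # kg of H2 burned per kg of pure water produced (figure 9)
steam_per_kg_h2 = 8.995     # kg of water vapour in the flue gas per kg of H2

condensed = h2_per_kg_pw * steam_per_kg_h2   # extra water recoverable per kg of distillate
print(f"~{condensed:.2f} kg of condensate per kg of HDH distillate "
      f"(~{condensed / (1 + condensed):.0%} of the combined output)")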
A 100% efficient electrolyzer could produce 0.03 kg of H2 per kWh. Today, depending on the technology used, an electrolyzer has an efficiency of 80%-90%, corresponding to 0.024-0.027 kg of H2 per kWh. With our results, we found an equivalent mechanical energy consumed in the desalination process of 620-757 kWh m−3 of fresh water produced.

In addition to the work on reducing the cost of hydrogen production, much effort is being devoted to increasing the evaporation in the humidifier. Although we have not considered any specific strategy to do this in the proposed equipment, it is worth mentioning some of them here:
• Decreasing the working pressure inside the humidifier [24]. At atmospheric pressure, saturated air at 75 °C carries 0.276 kg kg−1 of water vapour, but at half the atmospheric pressure it becomes saturated at 1 kg kg−1, almost quadrupling productivity and dividing the specific energy consumption by four.
• Multiple extractions of humid air from the humidifier with injection into the dehumidifier [39], which could improve the GOR and the heat recovery of the HDH cycle several times.

The systems reported in the literature that do not use solar energy exclusively make use of electrical energy or waste heat from some heating device to heat the water. In table 1 we compare our results with a series of results reported in the literature; it can be seen that the cost of the water produced by our equipment is not very different from those found by other authors.

Sensitivity to η_B and ε_Rec

In this subsection we explore the sensitivity of the results to variations of the seawater heater efficiency η_B and of the recuperator effectiveness ε_Rec. For this analysis we adopt run 2 from [21] as the reference. In figure 12 we show the variation of the fuel consumption for several values of η_B; all these values correspond to a single value of the recuperator effectiveness, ε_Rec = 0.8. As expected, fuel consumption drops as the seawater heater efficiency increases; however, an increase of 18% in efficiency yields only a 4.5% fuel saving. The influence of the recuperator effectiveness on the fuel consumption is shown in figure 13: fuel consumption increases in direct proportion to the decrease in the effectiveness of the recuperator.

Conclusions

Since desalination is energy intensive, the use of fossil fuels in thermal desalination plants is not a suitable approach, while desalination driven by solar energy is only possible for small-scale production; increasing the productivity of HDH systems therefore requires additional energy sources. Considering the perspective of environmental sustainability, which is an important issue for any desalination process, we have analysed in this paper the use of a seawater heater burning hydrogen in HDH desalination equipment. Among fuels, hydrogen has the largest calorific value (almost three times that of natural gas). In this sense, our proposal provides an option for increasing the productivity of an HDH desalination plant, either by increasing the temperature of the water if the plant is powered by solar energy, or by serving as the sole source of energy when solar energy is unavailable. We have also developed a thermodynamical model that can be used in the future to evaluate different modifications of the HDH cycle, in order to search for configurations with optimum performance.
Our equipment, which is a basic configuration, requires 0.025-0.03 kg of H2 per kg of pure water produced, but this figure could be reduced significantly if appropriate techniques are used to increase the rate of evaporation of water in the humidifier, such as mechanical air decompression and/or multiple extractions of vapour. The production costs of pure water are equivalent to those of other HDH equipment reported in the literature, and costs will fall as hydrogen achieves a global economy of scale, which is expected to happen in the coming decades.

We found that using an excess of air in the air + fuel mix beyond the minimum value appropriate for low NOx generation (φ ~ 1.3) does not provide significant benefits in hydrogen consumption, regardless of the values of ε_Rec and η_B (figures 12 and 13). The efficiency of the seawater heater has an impact on the production of pure water, but this is strongly mitigated by the heat recovery process. On the other hand, fuel consumption increases proportionally with the decrease in the effectiveness of the recuperator, which is therefore a key parameter for optimal performance.

Hydrogen is an energy vector that carries water. Burning hydrogen in the HDH process has the advantage that a large fraction of the water that was consumed in the electrolyzer to produce it is recovered during the desalination cycle. This is especially significant because desalination is needed in areas where fresh water is limited, while hydrogen production demands large amounts of ultra-pure water; if part of this water is recovered during the desalination process, the environmental impact of hydrogen production is reduced.

Beyond the cost, we must not overlook some of the limitations that this technology still has to overcome. The lack of hydrogen pipeline networks is one of them: for the moment, hydrogen must be transported to, or produced at, the locations where such plants are installed. Another limitation is the lack of a well developed hydrogen burner technology; this issue is being addressed at present, and a number of commercial options are expected to become available in the next few years.

It is worth mentioning that HDH technology, according to the life cycle analysis carried out by [52], is the desalination technology with the lowest environmental impact, up to ~84% less than that of a reverse osmosis plant with similar characteristics. HDH is also a technology of low maintenance and extended lifetime [53], which further reduces its environmental impact.

One of the issues that we have taken into consideration in the design of the HDH cycle is the emission of greenhouse gases. Hydrogen burning does not produce CO2; water, nitrogen and air (if a lean mix is used in the burner) are the only constituents of the flue gases. Nevertheless, recombination of nitrogen and oxygen during high-temperature burning can cause undesirable production of NOx. This situation can be adequately controlled by using an excess of air in the air-hydrogen mix, and we have found that this excess does not have a remarkable influence on the equipment performance (see figure 6).

Brine management is another issue of great concern in relation to the sustainable management of desalination processes. The large volume of brine produced has economic and environmental implications, especially when it is discharged into sensitive ecosystems. Humidification-dehumidification technology can work with very high salinity [54], allowing recirculation of brine in the dehumidifier.
This recirculation reduces the rejected volume of brine. In addition, heat can be recovered from the brine [52], increasing the energetic efficiency and lowering the brine temperature; high discharge temperature is another serious environmental problem of thermal desalination processes [55]. Another problem shared by all desalination technologies is the need to add chemicals to the feed water, which are usually washed into the sea after desalination. In thermal processes the use of anti-scalant agents is necessary to prevent the formation of scale within heat exchangers, humidification towers and water heaters. Anti-scalants are not necessary if the air is heated instead of the water [56]; scale formation and corrosion are then drastically reduced, so maintenance costs decrease substantially.

Several questions remain to be investigated. One of them is the sizing of the equipment, which is important for several reasons: on the one hand, it allows a more accurate evaluation of the cost of water; on the other hand, after sizing we could perform a life cycle analysis of the technology to assess the environmental impact of the desalination process we are proposing. The investigation of other cycles, such as air heating as proposed by [24], is also of relevance and will be addressed in the future. Another question to be investigated is the effect of recirculation of brine and heat recovery from it, because this would minimize the volume of brine, the cost of operation and the environmental impact.
2021-11-25T16:15:12.560Z
2021-11-23T00:00:00.000
{ "year": 2021, "sha1": "c3d4a101bc6b2a07d0d30e78c856fb87f688e17c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2634-4505/ac3ca0", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "40293b254422e2ab6dcfee6b36a27206f133b7cb", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
230724648
pes2o/s2orc
v3-fos-license
Multidisciplinary Intestinal Rehabilitation in Children: Results from a Korean Intestinal Rehabilitation Team Purpose: Intense multidisciplinary team effort is required for the intestinal rehabilitation of pediatric patients afflicted with intestinal failure (IF). These include enteral and parenteral nutrition (PN) support, monitoring of complications related to treatment, and considering further medical or surgical options for intestinal adaptation. Methods: In the intestinal rehabilitation team (IRT) at our center, we have experienced 25 cases of pediatric IF requiring multidisciplinary intestinal rehabilitation. This study is a retrospective review of the collected medical records. Results: Of the 25 subjects treated, 18 were boys and 7 were girls. At the time of referral to the IRT, the mean age was 1.6 years. Median follow-up was 42.9 months. The causes of IF were short bowel syndrome in 18 cases and motility-related in 7 cases. There are 24 patients alive at last follow-up: 12 patients have been weaned off PN, whereas 12 are still dependent on PN. Median time to weaning off PN was 4.8 months. There were 2 cases of IF-associated liver disease. Fifteen cases of central line associated blood stream infections occurred in 9 patients (0.82/1,000 PN days). Conclusion: We report the results of multidisciplinary intestinal rehabilitation of pediatric IF patients in a Korean IRT. Further studies are required to improve survival and enteral tolerance of these patients. INTRODUCTION Pediatric intestinal failure (IF) is a condition in which the function of the intestine is insufficient to sustain adequate growth and development [1]. With reduced intestinal function to absorb nutrients, fluids and electrolytes, patients require long-term supplementation with parenteral nutrition (PN). In children, IF is usually caused by extensive resection of the small intestine due to a variety of causes, including congenital malformations, thrombosis of the mesenteric vessels, or Crohn's disease [2,3]. Previous studies have reported mortality rates of 15% to 47%, depending on the age, underlying disease, and duration of venous nutrition, and the economic and social burden due to longterm venous nutritional complications [4]. The goal of this study is to report the process and outcome of multidisciplinary intestinal rehabilitation of pediatric IF patients in a Korean intestinal rehabilitation team (IRT). METHODS Retrospective review of medical records of all patients who were managed by our IRT from October 2014 to June 2019 was done. Patients younger than 18 years of age were selected and clinical data including age, gender, etiology, complications, and outcome of intestinal rehabilitation were collected. All patients received management by a multidisciplinary IRT, as previously described [5,6]. Briefly, the IRT consists of a pediatric surgeon, pharmacist, clinical dietitian, inpatient and home nursing staff focused on the care of pediatric patients with IF. Upon presentation to the IRT, a thorough work-up of the patient was performed to assess the nutrition status, length and anatomy of remnant bowel, and complications. Patients were prescribed a combination of PN and enteral nutrition (including oral intake) based upon their enteral tolerance. Enteral nutrition was encouraged as much as possible. Patients were monitored daily with vital signs, urine output, stool output, and body weight measurements. 
Blood was drawn weekly to analyze complete blood counts, electrolytes, liver enzymes, bilirubin, creatinine, albumin, and C-reactive protein. Trace element and vitamin levels were checked monthly to bimonthly. Patients were encouraged to be discharged as early as possible and to receive home PN. Home PN was provided to patients who satisfied our center's criteria for home PN (Table 1). IF-associated liver disease (IFALD) was diagnosed when 2 consecutive measurements taken more than 1 week apart showed direct bilirubin >2.0 mg/dL and the patient did not have other apparent causes of cholestatic liver dysfunction, including viral hepatitis, metabolic liver disease, structural anomalies of the hepatobiliary system, ongoing infection or sepsis, and prolonged use of hepatotoxic drugs. All patients' PN was administered via a tunneled single-lumen central venous catheter (4.2 Fr Broviac catheter; BARD, Covington, GA, USA), preferably inserted into the right internal jugular vein. Central line-associated blood stream infection (CLABSI) was diagnosed according to the surveillance definition of the Centers for Disease Control and Prevention/National Healthcare Safety Network [7].

RESULTS

Twenty-five patients younger than 18 years were managed during this period and were included in the analysis. There were 18 boys and 7 girls, referred to the IRT at a mean age of 1.6 years. The cause of IF was short bowel syndrome in 18 patients (Table 2). The underlying pathology of short bowel was necrotizing enterocolitis in 8 cases, midgut volvulus in 4 cases, intestinal atresia in 4 cases, and Hirschsprung's disease in 2 cases. The 7 patients with non-short bowel syndrome IF included chronic intestinal pseudo-obstruction (CIPO, 4 cases), megacystis microcolon intestinal hypoperistalsis syndrome (1 case), radiation enteritis (1 case), and autoimmune leiomyositis of the small bowel (1 case). Twenty-four patients were alive at last follow-up (Table 3). Fifteen cases of CLABSI were diagnosed in 9 patients; the incidence of CLABSI was 0.82/1,000 PN days. Serial transverse enteroplasty procedures were performed in 3 patients at 5.2 months, 50.8 months, and 52.3 months after each patient's initial surgical procedure. No patient received an intestinal transplant during the study period.

DISCUSSION

Management of patients with IF has undergone drastic changes in recent decades with the widespread implementation of multidisciplinary intestinal rehabilitation programs in dedicated centers around the world [8,9]. The application of a multidisciplinary team approach in pediatric intestinal rehabilitation has allowed IF patients to be provided with timely and successful integration of medical, surgical, and nutritional care. In turn, this has led to positive outcomes in terms of survival, sepsis events, and IFALD [10][11][12]. We report the outcome of multidisciplinary intestinal rehabilitation in pediatric patients since the initiation of an IRT at our institute. This is, to the best of our knowledge, the first report of multidisciplinary intestinal rehabilitation in pediatric patients from a Korean IRT. This study does not provide an analysis of improved outcomes attributable to the application of multidisciplinary IRT care, because we have employed a multidisciplinary team approach since the beginning of our IRT. However, our current outcome results are comparable to those reported from North American centers that actively promote multidisciplinary intestinal rehabilitation programs.
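For orientation, the reported CLABSI incidence can be inverted to estimate the total number of PN (catheter) days on which it is based; the figure printed below is back-calculated from the published rate and is not stated explicitly in the text.

clabsi_events = 15
incidence_per_1000 = 0.82                     # events per 1,000 PN days (reported)
implied_pn_days = clabsi_events / incidence_per_1000 * 1000
print(f"Implied total PN days ~ {implied_pn_days:,.0f}")

def clabsi_rate(events, catheter_days):
    """Events per 1,000 central-line (PN) days."""
    return events / catheter_days * 1000

print(f"Check: {clabsi_rate(clabsi_events, implied_pn_days):.2f} per 1,000 PN days")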
It is well known that central venous catheter-related complications are important factors affecting outcome of pediatric IF patients [13]. In a report by Merras-Salmio et al. [14] the incidence of CLABSI was 1.01/1,000 PN days. The authors ascribed their good outcomes to the use of commercially manufactured 3-in-1 PN bags and taurolidine central venous catheter locks for CLABSI prevention. We have not utilized taurolidine locks in our program but have been routinely using commercial 3-in-1 PN bags in all of our patients on home PN. We also believe that this practice has contributed to the relatively low incidence of CLABSI in our pediatric IF patients. Although fractures of central venous catheters are not well described in the literature, we have encountered several cases of catheter fractures that require surgical revision. It has proven to be a significant complication requiring more attention when managing IF patients. We have also experienced one severe case of central venous catheter-related thrombosis that extended from the left internal jugular vein to the superior vena cava. This 7-year-old boy required thrombectomy via open thoracotomy and temporary placement of a central venous catheter in the inferior vena cava. Liver dysfunction is a major concern in pediatric IF patients in need of long term PN support. It is believed that lipids, specifically lipids from soy are one of the main causes of liver dysfunction in IF patients manifesting as progressive cholestasis. We did not actively perform liver biopsies in any of our patients and employed the criteria of direct bilirubin > 2.0 mg/dL for diagnosis of IFALD. We experienced 2 cases of IFALD among our cohort of patients and both were successfully managed by the strategy of fish oil monotherapy, as previously published describing our group's earlier experience [5]. This novel approach of eliminating soybean oil and exclusively providing fish oil was successfully applied to the 2 children with CIPO. In conclusion, pediatric IF is a heavy burden for both patients and their medical staff. However, few centers have sufficient experience or dedicated personnel to provide necessary care for these children. Particularly important factors are IFALD and catheter-related complications, which may lead to patients requiring invasive procedures such as intestinal transplant. Team-based multidisciplinary approach and treatment protocols is important in improving outcomes in terms of enteral autonomy, survival, and quality of life.
2020-12-31T09:05:55.858Z
2020-01-23T00:00:00.000
{ "year": 2020, "sha1": "3ec9aff3fce3a710fc8011c5fcc1999dbc5258fc", "oa_license": "CCBYNC", "oa_url": "http://aps-journal.org/Synapse/Data/PDFData/2053APS/aps-26-61.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a01250edd3533f7217b9879b5f7525836c4d181a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
213982025
pes2o/s2orc
v3-fos-license
Research of textural and structural features of refractory gold-bearing ores

The article studies the material and mineralogical composition of the refractory gold-bearing ores of the weathering crust of one of the deposits in Eastern Siberia. According to their textural and structural features, the ores are multi-mineral and show significant differences in the composition of the detrital, granular material and the loose component. All of the ore is saturated with iron hydroxides and limonite crusts with clay components. The results of the analysis showed that the large size classes contain an insignificant amount of the valuable component; an increased gold content is noted starting from the -4+0 mm size class, and about 3-5% of the gold is in the free state. Processing of these types of raw materials is a rather complicated task. An analysis of enrichment methods showed that the use of new solvents of noble metals, with their subsequent concentration, is promising. Of practical interest are alkaline solutions obtained by dissolving sulfur in an aqueous suspension of calcium hydroxide; the reagent is non-toxic and environmentally friendly. Leaching was carried out over a wide range of concentrations of the solvent used. The optimum solvent concentrations and leaching durations were determined, with extraction of the valuable component into the filtrate reaching 97-98%. Chemical analysis of the cakes showed that the reagent does not enter into chemical interaction with the other elements that make up the ore, and the content of elemental sulfur in the dump cakes averages 0.64%, i.e. it corresponds to the content of up to 0.8% in the raw material. At the same time, the cakes are not environmentally harmful and can be stored on special platforms.

Introduction

Gold deposits that are simple in structure and easily accessible have been substantially worked out. Difficult and refractory placers are being brought into operation, characterized by a high clay content in the sands, fine gold particles, and conglomeration and plasticity of the productive strata. Such deposits, particularly of the loose type, contain no less than 45% of the total known reserves of Central and Eastern Siberia, and their share among deposits under exploration will increase over time. At the same time, the main source of gold production remains the primary gold-bearing ores, which are divided into oxidized and sulphide ores. Separate processing routes have been developed for these ore types: in particular, oxidized ores are processed by gold leaching with aqueous cyanide media, followed by sorption onto sorbents, while oxidized quartz and sulphide ores are enriched by gravity and flotation methods, producing concentrates suitable for further metallurgical processing. Recently, owing to the depletion of rich and easily beneficiated gold-bearing ores, there has been a trend towards developing technological schemes for processing mineral raw materials and commissioning new deposits with refractory ores containing a finely dispersed gold phase, a low content of valuable components and a high content of clay components. These factors, together with the high clay content, require higher-quality studies of the textural and structural features of the source ores. The purpose of this work is to study these textural and structural features, to develop a technology for processing these types of ores, and to introduce it into operation at existing processing plant facilities.
In this regard, the study examines ore from one of the deposits in Eastern Siberia. An analysis of enrichment methods for these types of ores revealed the advantages of using the most promising effective solvents of noble metals, with subsequent leaching [1], [2].

Research materials and methods

The gold ores presented for study are brownish-grey in colour and consist of fragments of rocks and minerals (~70%) and a sandy-clay component (~30%). The structure of the ore is determined by the size of the detrital grains, from coarse-grained (fragment size from 2 to 85-100 mm) to pelitic (<0.01 mm); there are also fragments with psammitic and aleuritic structures (with fragments from 1-2 mm down to 0.01 mm). As a result of selective weathering from the surface, the fragments are predominantly weathered, with an abundance of cells, grooves and pores filled with clay. All of the ore is impregnated with iron hydroxides, especially the clay part.

Clay with a particle size of 0.01-0.001 mm was analyzed by the thermal method. The thermogram shows that the bulk of the clays is kaolin-hydromica in composition, with 0.5-1% of quartz and calcite impurities as well as iron hydroxides. Scintillation analysis of the clay shows that the maximum amount of gold is at the first level of discrimination, i.e. the gold size ranges from 3 to 15 mm. Spectral analysis indicates an insignificant content of impurity elements; among the impurities encountered, contents do not exceed MnO 0.09% and Mg 1.14%. The gold content averages up to 3 g/t.

Granulometric analysis of the source ores showed that the coarse material, represented by a variety of rocks, makes up 25% of the ore in the -60+4 mm class, with an average gold content of 0.009 g/t; the granular part in the -4+0.074 mm class makes up 40%, with an average gold grade of 0.2 g/t; the clay components make up to 35%, with a gold content averaging up to 2.8 g/t.

Results and discussion

The results of the sieve analysis show that the large classes contain an insignificant amount of the valuable component; in the granular material, the gold content increases starting from a grain size of 4 mm. An increased gold content is noted in the -0.074+0 mm class [3], [4]. On the basis of the material, particle-size and chemical analyses of the gold-bearing sands, it was established that gold is closely associated with all the minerals of the fine phase: it is covered with films of iron hydroxides and limonite crusts, and is also cemented by limonite and clay components. About 3-5% of the gold is in the free state; in the finely dispersed state it is mainly concentrated in the small classes (-0.074+0 mm). Extraction of gold from these types of raw materials is a rather complicated task.

The analysis of the enrichment of these types of gold-bearing raw materials showed that the most promising approach is the use of effective solvents of noble metals, followed by their concentration. In world practice, cyanide compounds are used to recover noble metals, but there are refractory clay ores that are practically not amenable to cyanidation [5], [6]. Reagents alternative to cyanide compounds, although well proven in gold extraction, are used only on a pilot scale. The main advantages of cyanide compounds over other gold solvents are high selectivity with respect to noble metals, low reagent consumption, high recovery of gold into solution with subsequent isolation from cyanide solutions, and low corrosivity of the medium.
Despite its undoubted advantages, the cyanidation process has significant drawbacks. The main technological disadvantage of the cyanide process is the long leaching time. From the environmental point of view, the disadvantages include the extremely high toxicity of alkali metal cyanides, which belong to the first hazard class, and of the products of their interaction with ores. For a number of gold-producing regions, the high cost of environmental measures makes the development of otherwise promising deposits unattractive. The problem of detoxifying processing plant wastewater has not been fully resolved.

At present, a sufficiently wide range of solvents has been identified that are considered as alternatives to cyanide salts in the extraction of gold from ore raw materials. The search for and evaluation of noble metal solvents is carried out not only for environmental reasons but also with other goals in mind, for example the possibility of processing gold-bearing ores that are refractory to cyanide leaching. For this type of ore, a range of solvents is of interest, among which thiocarbamide (thiourea) leaching attracts the most attention.

Thiocarbamide (thiourea), CS(NH2)2, is a crystalline powder that dissolves well in water. For the leaching of gold, a solvent containing 0.5-2% CS(NH2)2, 1% H2SO4 and 0.3-0.4% Fe2(SO4)3 is used; iron(III) sulfate acts as the oxidizing agent. Ores with a noticeable amount of acid-soluble minerals must be subjected to acid treatment followed by washing with water before thiourea leaching; otherwise these minerals cause a high consumption of thiourea and, passing into solution, slow down the dissolution of gold. Thiourea processing is carried out at temperatures not exceeding 20-25 °C in order to avoid excessive solvent degradation. Thiourea pulps settle and filter poorly, so polyacrylamide and other flocculants have to be used when processing them.

Compared with cyanidation, ore processing with thiourea has the following advantages: a higher degree of gold leaching, a relatively low level of thiourea toxicity, and gold extraction from clay ores reaching 97-98%. Foreign researchers consider this reagent, as a solvent of noble metals, to be the most promising for heap and underground leaching. Thiourea leaching is, however, accompanied by a range of disadvantages:
• the relatively high cost and scarcity of the reagent;
• the need for acid-proof equipment;
• significant acid consumption (120-180 kg/t H2SO4);
• decomposition (oxidation) of thiourea, which increases solvent consumption and requires washing of the cakes to a neutral medium.

When thiosulfate solvents are used (Na2S2O3 at a concentration of 36 g/L; CuSO4 at 4 g/L as the oxidizing agent; NH4OH at 10 g/L as the medium regulator), the presence of compounds of antimony, copper, arsenic and some other mineral impurities does not noticeably depress gold leaching. To achieve acceptable gold extraction into solution, the temperature must be raised to 100-130 °C; the extraction of gold into the solvent is then up to 95-97%. As further studies have shown, thiosulfate leaching can also be implemented at lower temperatures at the expense of significant dilution of the pulp (to S:L = 1:10), an increase in the solvent concentration, or the application of a combined process to the ores: leaching with sorption of gold from the pulp using ion-exchange resins.
However, such conditions dramatically increase reagent consumption and overall ore processing costs; these circumstances substantially hinder the technological use of this reagent [7], [8].

Alkaline solutions are formed by the interaction of elemental sulfur with solutions of various hydroxides. These are multicomponent systems containing mono- and polysulfides in various ratios, metal thiosulfates, and free alkali. In the interaction of elemental sulfur with an aqueous suspension of calcium hydroxide, a lime-sulfur decoction is formed. It is a cherry-red liquid containing hydrosulfide ions (HS-), thiosulfate ions (S2O3(2-)) and polysulfide ions (Sn(2-)). Sulfur concentrations ranged from 25 to 200 g/L and calcium hydroxide concentrations from 50 to 200 g/L. The leaching process was carried out at room temperature for 24 hours at a ratio S:L = 1:3 in bottle-type agitators. The -4+0.074 mm class was crushed to a particle size of -0.074+0 mm and combined with the small classes of the original ore.

The results of the studies showed that the dissolution of gold with the alkaline solution proceeds over the entire range of concentrations of sulfur and calcium hydroxide. The optimal composition of the solvent corresponds to a sulfur concentration within the studied range and a calcium hydroxide concentration of 100-200 g/L, depending on the composition of the processed raw material. The optimal leaching duration was 5-7 hours; further increasing the duration of the process did not change the gold extraction. The residual gold content in the cakes was 0.0006-0.001 g/t. As the studies have shown, with an increase in the alkali content of the sulfur-alkaline solution, the extraction of gold into the filtrate decreases, since the concentrations of the polysulfide ion (Sn(2-)) and hydrosulfide ion (HS-) decrease. With increasing sulfur concentration, the transfer of gold into the filtrate also decreases, since in this case, along with polysulfide (Sn(2-)), hydrosulfide (HS-) and thiosulfate (S2O3(2-)) ions, sulfate (SO4(2-)) and sulfide (S(2-)) ions are formed, leading to the precipitation of slightly soluble calcium sulfates and sulfides. Extraction of gold into the filtrate was 97-98%, and cakes were obtained with a residual gold content of 0.0006-0.001 g/t.

Conclusion

Chemical analysis data on the material composition of the cakes indicate the selective nature of the reagent's action on the original ore. The reagent dissolves the metal without entering into chemical interaction with the other elements present in the original ore (sulfur, arsenic, titanium, etc.), which report to the dump cakes. The analysis results show that with a content of elemental sulfur in the initial ore of up to 0.8%, its content in the dump cakes varies from 0.14 to 1.14%, averaging 0.64%, i.e. it essentially corresponds to the content in the feedstock. It follows that during leaching there is no transfer of sulfur from the process reagent, in the form of sulfate ions, to the dump cakes. Moreover, the cakes are not environmentally harmful and can be stored as substandard ores at specially prepared sites. Based on the results of the research, technological regulations for the extraction of gold from refractory ores have been drawn up [4]. Consequently, the proposed gold leaching technology has an undoubted advantage over cyanide technology in both technological and environmental respects, since it eliminates the storage of cyanide-bearing tailings from ore processing and the need to develop special safety measures for working with cyanides.
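As a rough illustration of the recovery figures reported above, the sketch below computes the gold extraction implied by head and residue grades alone. It is a generic mass-balance estimate: it ignores the split of mass between solid and solution and any washing losses, which is one reason this simple grade-based number need not coincide exactly with the reported 97-98% extraction; the example grades are taken from the text.

def extraction_pct(head_grade_g_t, cake_grade_g_t):
    """Fraction of gold leaving the solid phase, estimated from head and residue grades."""
    return (1.0 - cake_grade_g_t / head_grade_g_t) * 100.0

# e.g. clay fraction grading ~2.8 g/t leached down to ~0.001 g/t in the cake:
print(f"~{extraction_pct(2.8, 0.001):.2f}% of the gold removed from the solid")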
In environmental terms, this technology poses no danger to the environment and can be recommended for industrial use.
2019-12-12T10:14:23.793Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "c805b90207c9baa31ae229bacbd4033da0ddf625", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1399/5/055040", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "fb8f671a1587337f0eb4224743cd098f17f86f3a", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Engineering" ] }
55505886
pes2o/s2orc
v3-fos-license
A to Z of Flavour with Pati-Salam We propose an elegant theory of flavour based on $A_4\times Z_5$ family symmetry with Pati-Salam unification which provides an excellent description of quark and lepton masses, mixing and CP violation. The $A_4$ symmetry unifies the left-handed families and its vacuum alignment determines the columns of Yukawa matrices. The $Z_5$ symmetry distinguishes the right-handed families and its breaking controls CP violation in both the quark and lepton sectors. The Pati-Salam symmetry relates the quark and lepton Yukawa matrices, with $Y^u=Y^{\nu}$ and $Y^d\sim Y^e$. Using the see-saw mechanism with very hierarchical right-handed neutrinos and CSD4 vacuum alignment, the model predicts the entire PMNS mixing matrix and gives a Cabibbo angle $\theta_C\approx 1/4$. In particular it predicts maximal atmospheric mixing, $\theta^l_{23}=45^\circ\pm 0.5^\circ$ and leptonic CP violating phase $\delta^l=260^\circ \pm 5^\circ$. The reactor angle prediction is $\theta^l_{13}=9^\circ\pm 0.5^\circ$, while the solar angle is $34^\circ \geq \theta^l_{12}\geq 31^\circ$, for a lightest neutrino mass in the range $0 \leq m_1 \leq 0.5$ meV, corresponding to a normal neutrino mass hierarchy and a very small rate for neutrinoless double beta decay. Introduction The problem of understanding the quark and lepton masses, mixing angles and CP violating phases remains one of the most fascinating puzzles in particle physics. Following the discovery of a Standard Model (SM)-like Higgs boson at the LHC [1], it seems highly plausible that quark masses, mixing angles and CP phase originate from Yukawa couplings to a Higgs field. However the SM offers absolutely no insight into the origin or nature of these Yukawa couplings, motivating approaches beyond the SM [2]. In the quark sector, the Yukawa couplings are organised into 3 × 3 quark Yukawa matrices Y u and Y d , which must be responsible for the quark mass hierarchies and small quark mixing angles, together with the CP phase. Similarly, the charged lepton Yukawa matrix Y e must lead to a mass hierarchy similar to that of the down-type quarks. The origin of small quark mixing and CP violation and the strong mass hierarchies of the quarks and charged leptons, with an especially strong hierarchy in the up-type quark sector, is simply unexplained within the SM. The nine charged fermion masses, three quark mixing angles, including the largest Cabibbo angle θ C ≈ 13 • , and the CP phase are all determined from experiment. From a more fundamental point of view, the three Yukawa matrices Y u , Y d and Y e contain 54 undetermined Yukawa couplings leading to 13 physical observables with a calculable scale dependence [3]. Following the discovery of atmospheric neutrino oscillations by Super-Kamiokande in 1998 and solar neutrino oscillations by SNO in 2002 [4], Daya Bay has recently accurately measured a non-zero reactor angle [5] which rules out tri-bimaximal (TB) [6] mixing. However, recent global fits [7,8,9,10] are consistent with tri-bimaximal-Cabibbo (TBC) [11] mixing, based on the TB atmospheric angle θ l 23 ≈ 45 • , the TB solar angle θ l 12 ≈ 35 • and a reactor angle θ l 13 ≈ θ C / √ 2 ≈ 9 • . The extra parameters of the lepton sector include three neutrino masses, three lepton mixing angles and up to three CP phases, although no leptonic CP violation has yet been observed and the lightest neutrino mass has not been measured. 
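As a purely numerical illustration of the tri-bimaximal-Cabibbo relations quoted above (standard relations from [11], not a new result of this paper):

\[
\sin\theta^{l}_{13} \simeq \frac{\sin\theta_C}{\sqrt{2}} \simeq \frac{0.225}{1.414} \simeq 0.159
\;\Rightarrow\; \theta^{l}_{13} \simeq 9.2^{\circ},
\qquad
\tan\theta^{l}_{12} \simeq \frac{1}{\sqrt{2}} \;\Rightarrow\; \theta^{l}_{12} \simeq 35.3^{\circ},
\qquad
\theta^{l}_{23} \simeq 45^{\circ}.
\]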
The 9 additional neutrino observables, together with the 13 physical observables in the charged fermion sector, requires 22 unexplained parameters in the flavour sector of the SM. This provides a powerful motivation to search for theories of flavour (TOF) based on discrete family symmetry which contain fewer parameters [12]. The origin of neutrino mass is presently unknown and certainly requires some extension of the SM, even if only by the addition of right-handed (RH) neutrinos which are singlets under the SM gauge group. Since such RH neutrinos may have large Majorana masses, in excess of the electroweak breaking scale, such a minimal extension naturally leads to the idea of a see-saw mechanism [13], resulting from a neutrino Yukawa matrix Y ν , together with a complex symmetric Majorana matrix M R of heavy right-handed neutrinos, leading to a light effective Majorana neutrino mass matrix m ν ∼ v 2 Y ν M −1 R Y ν T , where v is the Higgs vacuum expectation value (VEV). However the see-saw mechanism does not explain large lepton mixing angles, with the smallest being the reactor angle θ l 13 ≈ 9 • , nor does it address any of the flavour puzzles in the charged fermion sector. The origin of large lepton mixing may be accounted for within the see-saw mechanism with the aid of sequential dominance (SD) [14]. For example, with an approximately diagonal M R , the lightest right-handed neutrino ν atm R may give the dominant contribution to the atmospheric neutrino mass m 3 , the second lightest right-handed neutrino ν sol R to the solar neutrino mass m 2 and the heaviest, almost decoupled, right-handed neutrino ν dec R may be responsible for the lightest neutrino mass m 1 . The immediate prediction of SD is a normal neutrino mass hierarchy, m 3 > m 2 m 1 , which will be tested in the near future. However SD also provides a simple way to account for maximal atmospheric mixing and tri-maximal solar mixing by adding constraints to the first two columns of the neutrino Yukawa matrix Y ν , with the third column assumed to be approximately decoupled from the see-saw mechanism. In the diagonal Y e basis, if the dominant first column of Y ν is proportional to (0, 1, 1) T then this implies a maximal atmospheric angle tan θ l 23 ≈ 1 [15]. This could be achieved with a non-Abelian family symmetry such as A 4 [16], if the first column is generated by a triplet flavon field with a vacuum alignment proportional to (0, 1, 1) T . In such models, it has been shown that the vacuum alignment completely breaks the A 4 symmetry, and such models are therefore referred to as "indirect" models [17]. Such "indirect" models are highly predictive and do not require such large discrete groups as the "direct" models where the Klein symmetry of the neutrino mass matrix is identified as a subgroup of the family symmetry [18,19,20]. "Indirect" models of leptons have been constructed based on A 4 using both CSD3 [23] and CSD4 [24] since these are the most promising from the point of view of the reactor angle. From the point of view of extending to the quark sector, CSD4 seems to be the most promising since in unified models with Y u = Y ν , the second column is proportional to (1,4,2) T . This simultaneously provides a prediction for both lepton mixing and the Cabibbo angle θ C ≈ 1/4 in the diagonal Y d ∼ Y e basis [25]. The model in [25] was based on A 4 family symmetry with Z 4 3 × Z 5 5 and quark-lepton unification via the Pati-Salam (PS) [26] gauge subgroup SU (4) P S × SU (2) L × U (1) R and the CSD4 alignment (1,4,2). 
The small quark mixing angles arose from higher order (HO) corrections appearing in Y u and Y ν , providing a theoretical error or noise which blurred the PMNS predictions. Here we discuss an alternative A 4 model which has three advantages over the previous model. Firstly it is more unified, being based on the full PS gauge group SU (4) P S × SU (2) L × SU (2) R [26]. Secondly it introduces only a single Z 5 symmetry, replacing the rather cumbersome Z 4 3 × Z 5 5 symmetry. Thirdly, it accounts for small quark mixing angles already at the leading order (LO), with all Higher Order (HO) corrections being rather small, leading to more precise predictions for the PMNS parameters, such as maximal atmospheric mixing. Unlike other A 4 ×PS models (see e.g. [27]), the present model does not involve any Abelian U (1) family symmetry. Instead the left-handed PS fermions are unified into a triplet of A 4 while the right-handed PS fermions are distinguished by Z 5 , as in Fig. 1. In the present paper, then, we propose a rather elegant TOF based on the PS gauge group combined with a discrete A 4 × Z 5 family symmetry. PS unification relates quark and lepton Yukawa matrices and in particular predicts equal up-type quark and neutrino Yukawa matrices Y u = Y ν , leading to Dirac neutrino masses being equal to up, charm and top masses. The see-saw mechanism then implies very hierarchical right-handed neutrinos. The A 4 family symmetry determines the structure of Yukawa matrices via the CSD4 vacuum alignment [23,24], with the three columns of Y u = Y ν being proportional to (0, 1, 1) T , (1, 4, 2) T and (0, 0, 1) T , respectively, where each column has an overall phase determined by Z 5 breaking, which controls CP violation in both the quark and lepton sectors. The down-type quark and charged lepton Yukawa matrices are both approximately equal and diagonal Y d ∼ Y e , but contain small off-diagonal elements responsible for the small quark mixing angles θ q 13 and θ q 23 . The model predicts the Cabibbo angle θ C ≈ 1/4, up to such small angle corrections. The main limitation of the model is that it describes the fermion masses and small quark mixing angles by 16 free parameters. The main success of the model is that, since there are 6 fewer parameters than the 22 flavour observables, it predicts the entire PMNS lepton mixing matrix including the three lepton mixing angles and the three leptonic CP phases. The model may be tested quite soon via its prediction of maximal atmospheric mixing with a normal neutrino mass hierarchy. The layout of the remainder of the paper is as follows. In Section 2, we give a brief overview of the essential features of the model. In Section 3, we present the full model and show how the messenger sector can lead to effective operators, then discuss how these operators lead to Yukawa and Majorana mass matrices. In Section 4, we derive the quark masses and mixing, including CP violation, arising from the quark Yukawa matrices, first analytically, then numerically. In Section 5, we implement the see-saw mechanism, then consider the resulting neutrino masses and lepton mixing, with modified Georgi-Jarlskog relations, before performing a full numerical analysis of neutrino masses and lepton mixing, including CP violation. In Section 6, we consider higher order corrections to the results and show that they are small due to the particular messenger sector. Finally Section 7 concludes the paper. 
A 4 group theory is discussed in Appendix A and the origin of the light Higgs doublets H u and H d in Appendix B. 2 Overview of the model Symmetries of the model The model is based on the Pati-Salam gauge group [26], with A 4 × Z 5 family symmetry, The quarks and leptons are unified in the PS representations as follows, where the SM multiplets Q i , L i , u c i , d c i , ν c i , e c i resulting from PS breaking are also shown and the subscript i (= 1, 2, 3) denotes the family index. The left-handed quarks and leptons form an A 4 triplet F , while the three (CP conjugated) right-handed fields F c i are A 4 singlets, distinguished by Z 5 charges α, α 3 , 1, for i = 1, 2, 3, respectively. Clearly the Pati-Salam model cannot be embedded into an SO(10) Grand Unified Theory (GUT) since different components of the 16-dimensional representation of SO(10) would have to transform differently under A 4 × Z 5 , which is impossible. On the other hand, the PS gauge group and A 4 could emerge directly from string theory (see e.g. [28]). Pati-Salam breaking The Pati-Salam gauge group is broken at the GUT scale to the SM, by PS Higgs, H c and H c , These acquire vacuum expectation values (VEVs) in the "right-handed neutrino" directions, with equal VEVs close to the GUT scale 2 × 10 16 GeV, so as to maintain supersymmetric gauge coupling unification. Since the PS Higgs fields do not carry any A 4 × Z 5 charges, the potential responsible for supersymmetric PS breaking considered in [29] is assumed to be responsible for PS breaking here. CP violation Our starting point is to assume that the high energy theory, above the PS breaking scale, conserves CP [30]. We shall further assume that CP is spontaneously broken by the complex VEVs of scalar fields which spontaneously break A 4 and Z 5 . The scalars include A 4 triplets φ ∼ 3, A 4 singlets ξ ∼ 1, and other one dimensional A 4 representations such as Σ u ∼ 1 and Σ d ∼ 1 . In addition all of the above fields carry Z 5 charges denoted as the powers α n , where α = e 2πi/5 and n is an integer. For example ξ ∼ α 4 under Z 5 . The group theory of A 4 is reviewed in Appendix A, while Z 5 corresponds to α 5 = 1. Under a CP transformation, the A 4 singlet fields transform into their complex conjugates [31], where the complex conjugate fields transform in the complex conjugate representations under On the other hand, in the Ma-Rajarsakaran [16] basis of Appendix A, for A 4 triplets φ ∼ (φ 1 , φ 2 , φ 3 ), a consistent definition of CP symmetry requires the second and third triplet components to swap under CP [31], CP violation has also been considered in a variety of other discrete groups [32]. With the above definition of CP, all coupling constants g and explicit masses m are real due to CP conservation and the only source of phases can be the VEVs of fields which break In the model of interest, all the physically interesting CP phases will arise from Z 5 breaking as in [30]. For example, consider the A 4 singlet field ξ which carries a Z 5 charge α 4 . The VEV of this field arises from Z 5 invariant quintic terms in the superpotential [30], where, as in [30], P denotes a singlet and the coupling g and mass m are real due to CP conservation. The F-term condition from Eq.8 is, This is satisfied, for example, by ξ = |(Λ 3 m 2 ) 1/5 |e −4iπ/5 , where we arbitrarily select the phase to be −4π/5 from amongst a discrete set of five possible choices, which are not distinguished by the F-term condition, as in [24]. 
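Schematically, the flavon potential being described can be summarised as follows; the superpotential term shown is inferred from the quoted solution rather than copied from Eq.8, so it should be read as a sketch:

\[
W \supset g\,P\left(\frac{\xi^{5}}{\Lambda^{3}} - m^{2}\right)
\;\Rightarrow\;
F_{P} = g\left(\frac{\xi^{5}}{\Lambda^{3}} - m^{2}\right) = 0
\;\Rightarrow\;
\langle \xi \rangle = \left|\left(\Lambda^{3} m^{2}\right)^{1/5}\right| e^{2\pi i k/5},
\qquad k = 0,\dots,4,
\]

with the choice k = 3 (phase 6π/5, i.e. −4π/5 mod 2π) corresponding to the example quoted in the text.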
We emphasise that CP breaking is controlled by the Abelian Z 5 symmetry rather than the non-Abelian A 4 symmetry. Vacuum alignment Let us now consider the A 4 triplet fields φ which also carry Z 5 charges. In the full model there are four such triplet fields, or "flavons", denoted as φ u The idea is that φ u i are responsible for up-type quark flavour, while φ d i are responsible for down-type quark flavour. These VEVs are driven by the superpotential terms, where P ij are linear combinations of singlets as in [24]. The coupling constants g ij , mass parameters M ij and cut-off scale Λ are enforced to be real by CP while the fields φ u i and φ d i will develop VEVs with quantised phases. If we assume that φ u i both have the same phase, e imπ/5 , then Eq.10 implies that φ d i should have phases e inπ/5 such that where n, m are positive or negative integers. The structure of the Yukawa matrices depends on the so-called CSD4 vacuum alignments of these flavons which were first derived in [24], and we assume a similar set of alignments here, although here the overall phases are quantised due to Z 5 , and We note here that the vacuum alignments in Eq.13 and the first alignment in Eq.12 are fairly "standard" alignments that are encountered in tri-bimaximal mixing models, while the second alignment in Eq.12 is obtained using orthogonality arguments, as discussed in [24], to which we refer the interested reader for more details. Two light Higgs doublets The model will involve Higgs bi-doublets of two kinds, h u which lead to up-type quark and neutrino Yukawa couplings and h d which lead to down-type quark and charged lepton Yukawa couplings. In addition a Higgs bidoublet h 3 , which is also an A 4 triplet, is used to give the third family Yukawa couplings. After the PS and A 4 breaking, most of these Higgs bi-doublets will get high scale masses and will not appear in the low energy spectrum. In fact only two light Higgs doublets will survive down to the TeV scale, namely H u and H d . The precise mechanism responsible for this is quite intricate and is discussed in Appendix B. Analogous Higgs mixing mechanisms are implicitly assumed in many models, but are rarely discussed explicitly (however for an example within SO(10) see [33]). The basic idea is that the light Higgs doublet H u with hypercharge Y = +1/2, which couples to up-type quarks and neutrinos, is a linear combination of components of the Higgs bi-doublets of the kind h u and h 3 , while the light Higgs doublet H d with hypercharge Y = −1/2, which couples to down-type quarks and charged leptons, is a linear combination of components of Higgs bi-doublets of the kind h d and h 3 , Yukawa operators The renormalisable Yukawa operators, which respect PS and A 4 symmetries, have the following form, leading to the third family Yukawa couplings shown, using Eqs.2,14, where we have used Eqs.2,14. The non-renormalisable operators, which respect PS and A 4 symmetries, have the following form, where i = 1 gives the first column of each Yukawa matrix, while i = 2 gives the second column and we have used Eqs.2,14. Thus the third family masses are naturally larger since they correspond to renormalisable operators, while the hierarchy between first and second families arises from a hierarchy of flavon VEVs. 
Yukawa matrices Inserting the vacuum alignments in Eqs.12 and 13 into Eqs.16 and 17, together with the renormalisable third family couplings in Eq.15, gives the Yukawa matrices of the form, The PS unification predicts the equality of Yukawa matrices Y u = Y ν and Y d ∼ Y e , while the A 4 vacuum alignment predicts the structure of each Yukawa matrix, essentially identifying the first two columns with the vacuum alignments in Eqs.12 and 13. With a diagonal right-handed Majorana mass matrix, Y ν leads to a successful prediction of the PMNS mixing parameters [24]. Also the Cabibbo angle is given by θ C ≈ 1/4 [25]. Thus Eq.18 is a good starting point for a theory of quark and lepton masses and mixing, although the other quark mixing angles and the quark CP phase are approximately zero. However above discussion ignores the effect of Clebsch factors which will alter the relationship between elements of Y d and Y e , which also include off-diagonal elements responsible for small quark mixing angles in the full model. The Model The most important fields appearing in the model are defined in Table 1. In addition to the fields introduced in the previous overview, the full model involves Higgs bi-doublets h 15 in the adjoint of SU (4) C , as well as messenger fields X with masses given by the VEV of dynamical fields Σ. The effective non-renormalisable Yukawa operators therefore arise from a renormalisable high energy theory, where heavy messengers X with dynamical masses Σ are integrated out, below the energy scale Σ . Operators from Messengers Although the Yukawa operators in the up sector of the full model turn out to be the same as in Eq.16, the Yukawa operators in the down sector of the full model will involve Clebsch factors which will imply that Y d and Y e are not equal. In addition Y d and Y e will involve off-diagonal elements which however are "very small" in the sense that they will give rise to the small quark mixing angles of order V ub and V cb . The Cabibbo angle arises predominantly from the second column of Y u , with the prediction V us ∼ 1/4 being corrected by the very small off-diagonal elements of Y d . The allowed Yukawa operators arise from integrating out heavy fermion fields called "messengers" and will depend on the precise choice of fermion messengers. In Table 1 we have allowed messengers of the form X F i for charges α i (i = 1, . . . , 4), with a very restricted set of messengers X F 1 (X F 1 ) and X F 3 (X F 3 ) with charges α and α 3 , in the 1 (1 ) representation of A 4 . The assumed messengers X F i have allowed couplings to φF as follows, The messengers X F and X F have allowed couplings to hF c i as follows, The messengers couple to each other and become heavy via the dynamical mass fields Σ which appear in Table 1, Figure 3: The fermion messenger diagrams responsible for the operators which lead to the diagonal charged lepton and down type quark masses. The fermions depicted by the solid line have even R-parity. The leading order operators responsible for the Yukawa couplings involving the first and second families to Higgs fields are obtained by integrating out the heavy messengers, leading to effective operators. The diagrams in Fig.2 yield the following operators which will be responsible for the up-type quark and neutrino Yukawa couplings, The above operators are similar to those in Eq.16 and will yield a Yukawa matrix Y u = Y ν as in Eq.18. 
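Since the explicit matrices of Eq.18 are not written out above, a hedged reconstruction of the up-sector structure, based on the CSD4 columns quoted in the Conclusions (first two columns proportional to (0,1,1)^T and (1,4,2)^T, third column (0,0,1)^T), is

Y_u = Y_\nu \;\approx\;
\begin{pmatrix}
0 & b & 0 \\
a & 4b & 0 \\
a & 2b & c
\end{pmatrix},

with each column carrying its own quantised Z_5 phase, while the φ^d alignments of Eq.13 lead to Y_d ∼ Y_e that are approximately diagonal at this order (as stated in the Conclusions), up to the small off-diagonal entries and Clebsch factors introduced in the full model.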
The diagrams in Fig.3 yield the operators which will be responsible for the diagonal down-type quark and charged lepton Yukawa couplings, These operators are similar to those in Eq.17 and will yield Yukawa matrices similar to those in Eq.18 but with Y d = Y e due to the Clebsch-Gordan coefficients from the Higgs in the 15 dimensional representation of SU (4) C . In addition, the above messenger sector generates further effective operators which give rise to off-diagonal down-type quark and charged lepton Yukawa couplings, Figure The operators responsible for the heavy Majorana neutrino masses are given by, corresponding to the diagrams in Fig.4. These operators are mediated by the singlet messengers X ξ i and involve the explicit messenger mass scale Λ which may take values higher than the A 4 ×Z 5 and Pati-Salam breaking scales. The first three of these operators are controlled by the Majoron fields ξ i in Table 1, which carries a non-trivial phase due to the Z 5 symmetry, as discussed later. Note that the dynamical mass Σ fields do not enter the Majorana sector since they transform under A 4 as 1 , 1 and hence do not couple to pairs of X ξ i . Also note that the Majoron ξ fields which transform under A 4 × Z 5 as ξ ∼ (1, α 4 ) do not enter the charged fermion sector since they do not couple X F i to the messengers X F and X F which transform under A 4 as 1 and 1 . Yukawa and Majorana mass matrices According to the mechanism discussed in Appendix B, the four Higgs multiplets in the fourth block of With the vacuum alignments in Eq. 12, the operators in Eq. 22 then result in nondiagonal and equal up-type quark and neutrino Yukawa matrices, where, Note that since Y u = Y ν , the up-type quark masses are equal to the Dirac neutrino masses, From Eq.27 the up-type quark masses are given to excellent approximation by, The Yukawa coupling eigenvalues for up-type quarks are given by, where we have inserted some typical up-type quark Yukawa couplings, hence, where the ratio of up to charm masses is accounted for by the 2% ratio of flavon VEVs. Similarly, with the vacuum alignments in Eqs. 12,13, the operators in Eqs. 23,24 then result in down-type quark and charged lepton Yukawa matrices related by Clebsch factors, where the diagonal Yukawa couplings for down-type quarks are given by, where u,d were defined in Eq.26 and for low tan β we have inserted some typical downtype quark Yukawa couplings, assuming that the mixing angles are small. The offdiagonal entries to the down-type quark and charged lepton Yukawa matrices are given by, where, From Eq.33 the diagonal down-type quark and charged lepton Yukawa couplings are related by, These are the well-known Georgi-Jarlskog (GJ) relations [34], although the factor of 1/3 which appears in the first relation above arises from a new mechanism, namely due to non-singlet fields which appear in the denominator of effective operators as discussed in detail in [35]. The viablity of the GJ relations for mass eigenstates is discussed in [3]. However here there are small off-diagonal entries in the Yukawa matrices which will provide corrections to the mass eigenstates, as well as other corrections to the GJ relations, as discussed later. 
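The relations referred to in Eq.37 are the standard GUT-scale Georgi-Jarlskog relations, including the 1/3 factor discussed above:

y_e \approx \tfrac{1}{3}\, y_d, \qquad y_\mu \approx 3\, y_s, \qquad y_\tau \approx y_b.

As discussed later, the off-diagonal entries and an additional adjoint flavon modify these relations, in particular replacing the factor 3 in the muon relation by approximately 4.5.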
Finally, from Eq.25, we find the heavy Majorana mass matrix, The heavy Majorana neutrino masses from Eq.25 are in the ratios, where,ξ There is a competing correction to M 1 coming from the off-diagonal element, namely M 2 13 /M 3 ∼ξ 2 with the same phase, which may be absorbed into the definition of the lightest right-handed neutrino mass. Since we need to have a strong hierarchy of righthanded neutrino masses we shall require (see later), which may be achieved for example by, Typically the heaviest right-handed neutrino mass is given by, which is within an order of magnitude of the Pati-Salam breaking scale in Eq.5. This implies that Λ ∼ 5.10 16 GeV and hence, from Eq.42, The Majoron fields ξ act like a dynamical mass for M 2 , with an effective coupling ξN c 2 N c 2 with a coupling constant of about 0.1. In principle they could play a role in leptogenesis. For example, the effect of Majorons on right-handed neutrino annihilations, leading to possibly significantly enhanced efficiency factors, was recently discussed in [36]. Convention We shall use the convention for the quark Yukawa matrices, which are diagonalised by, The CKM matrix is then given by, In the PDG parameterization [37], in the standard notation, U CKM = R q 23 U q 13 R q 12 in terms of s q ij = sin(θ q ij ) and c q ij = cos(θ q ij ) and the CP violating phase δ q . Analytic estimates for quark mixing In the above convention, the quark Yukawa matrices differ from those given in Eqs.27,33 by a complex conjugation, 1 where the parameters defined in Eqs.31,34,35 are given below, where we have displayed the phases from Eqs.12,13 explicitly in the new convention. Cabibbo mixing clearly arises predominantly from the up-type quark Yukawa matrix Y u , which leads to a Cabibbo angle θ C ≈ 1/4 or θ C ≈ 14 • . The other quark mixing angles and CP violating phase arise from the off-diagonal elements of Y d , which also serve to correct the Cabibbo angle to yield eventually θ C ≈ 13 • . Recall that any 3×3 unitary matrix U † can be written in terms of three angles θ ij , three phases δ ij (in all cases i < j) and three phases ρ i in the form [15], where and similarly for U 13 , U 23 , where s ij = sin θ ij and c ij = cos θ ij and the angles can be made positive by a suitable choice of the δ ij phases. We use this parameterisation for 1 The complex conjugation of the Yukawa matrices arises from the fact that the Yukawa matrices given in Eqs. 27,33 correspond involving the unbarred left-handed and CP conjugated right-handed fields. Note that our LR convention for the quark Yukawa matrices in Eq.45 differs by an Hermitian conjugation compared to that used in the Mixing Parameter Tools package [38] due to the RL convention used there. both U † u L and U † d L , where the phases ρ i can be absorbed into the quark mass eigenstates, leaving where U † u L contains θ u ij and δ u ij , while U † d L contains θ d ij and δ d ij . The CKM matrix before phase removal may be written as On the other hand, U CKM can be also parametrised as in Eq. (52), The angles θ ij are the standard PDG ones in U CKM , and five of the six phases of U CKM in Eq. (56) may be removed leaving the standard PDG phase in U CKM identified as [15]: In the present case, given Y u , it is clear that θ u 13 ≈ θ u 23 ≈ 0. Similary, given Y d , we see that θ d 12 ≈ 0. This implies that Eq. (55) simplifies to: Then, by equating the right-hand sides of Eqs. 
(56) and (58) and expanding to leading order in the small mixing angles, we obtain the following relations: from which we deduce, where, Notice from Eqs.62,63 that the magnitudes of the Yukawa matrix elements are all approximately fixed in terms of physical quark mixing parameters, Since |y 0 d /y 0 b | ∼ 0.001, Eq.64 implies that, where the last relation uses Eq.36. Concerning the phases, from Eq.51 we find, in the convention of Eq.53, where, from Eq.11, n + m is a multiple of 5. Hence, from Eqs.59,60,61, so the physical CP phase is given by the very approximate expression, Clearly CP violation requires n = m, indeed δ q only depends on the difference n − m with a positive value of δ q ∼ 7π 18 in the first quadrant requiring n < m. Since n + m must be a multiple of 5, then the only possibility is n = 2, m = 3 which corresponds to one of the discrete choices of phases in Eq.11. Numerical results for quark mixing With the phases fixed by the choice of discrete choice of phases n = 2, m = 3, as discussed in the previous subsection, the only free parameters are a, b, c in the up sector, and A, B, C and y 0 d , y 0 s , y 0 b in the down sector matrices, where we have explicitly removed the phases from these parameters, in order to make them real, Note that we have introduced a small correction term in the (1, 3) entry of Y u which will mainly affect θ q 13 . Physically this corresponds to a small admixture of the first component of the Higgs triplet h 3 contributing to the physical light Higgs state H u , as discussed in Appendix B. The previous analytic results were for = 0, but we find numerically that the best fit to CKM parameters requires a non-zero value of . Although the quark results are insensitive to the sign of y 0 b , the lepton sector results lead to a better fit with the negative sign of y 0 b as discussed later. Using the Mixing Parameter Tools (MPT) package [38], in Fig.5 we show the CKM parameters for different choices of A, B as a function of C. θ q 23 is really only sensitive to C only, while θ q 12 is mainly sensitive to B. θ q 13 and δ q are both sensitive A. The effect of the correction is to shift the blue dashed curve to the red solid curve, lowering θ q 13 while leaving θ q 23 almost unchanged, allowing the best fit of the CKM parameters for C = 36. To take a concrete example, for the red solid at the value C = 36, with the above input parameters A = 9, B = 7 (c.f. Eq.65) and = −2.4 × 10 −3 , we find the quark Yukawa eigenvalues at the high scale, These parameters are consistent with those given, for example, in [3], after including RG corrections, in particular due to the large top Yukawa coupling. Notice that there are as many input parameters as there are physical observables in the quark sector, so no prediction is claimed. However we emphasise two interesting features, firstly that the Cabibbo angle is understood to arise from Y u leading to θ C ≈ 1/4 or θ C ≈ 14 0 , with a small (one degree) correction mainly controlled by B. Secondly the phases which appear are quantised according to Z 5 , which also controls the leptonic phases as discussed in the following subsection. Indeed, with Y ν = Y u fixed by the quark sector, the entire neutrino sector only depends on three additional right-handed neutrino masses, which determine the three physical neutrino masses, with the entire neutrino mixing matrix then being fully determined, with only very small charged lepton mixing corrections appearing in the PMNS mixing matrix. 
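Since the explicit diagonalisation (Eqs.45-47) and the MPT fits are only summarised above, the following minimal numerical sketch (Python) illustrates the procedure: build Y_u and Y_d with the quoted structure, diagonalise, and read off the CKM angles and phase. All entries below are illustrative placeholders (only ε = −2.4 × 10^{-3} and the CSD4 column pattern are taken from the text); in particular the exact placement and normalisation of the A, B, C terms in Y_d is not recoverable here, so generic small off-diagonal entries stand in for them.

import numpy as np

def ckm_from_yukawas(Yu, Yd):
    # Left-handed rotations: columns of U are eigenvectors of Y Y^dagger,
    # ordered by ascending eigenvalue (lightest to heaviest family).
    def left_unitary(Y):
        _, U = np.linalg.eigh(Y @ Y.conj().T)
        return U
    V = left_unitary(Yu).conj().T @ left_unitary(Yd)  # CKM up to unphysical phases
    th13 = np.arcsin(abs(V[0, 2]))
    th12 = np.arctan2(abs(V[0, 1]), abs(V[0, 0]))
    th23 = np.arctan2(abs(V[1, 2]), abs(V[2, 2]))
    # Rephasing-invariant CP phase from the Jarlskog invariant; arcsin fixes only
    # sin(delta), so the quadrant must be resolved separately.
    J = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))
    sin_d = J / (np.cos(th12) * np.sin(th12) * np.cos(th23) * np.sin(th23)
                 * np.cos(th13) ** 2 * np.sin(th13))
    return np.degrees([th12, th13, th23, np.arcsin(np.clip(sin_d, -1.0, 1.0))])

# Illustrative inputs only: CSD4 column pattern (0,1,1), (1,4,2), (0,0,1) with an
# assumed common Z5-quantised phase on the first two columns.
a, b, c, eps = 2.0e-5, 1.5e-3, 0.5, -2.4e-3
Yu = np.array([[0, b, eps * c],
               [a, 4 * b, 0],
               [a, 2 * b, c]], dtype=complex)
Yu[:, :2] *= np.exp(-3j * np.pi / 5)
Yd = np.diag([5.0e-4, 1.0e-2, 5.0e-2]).astype(complex)
Yd[1, 2] = 2.0e-3 * np.exp(2j * np.pi / 5)   # stand-ins for the small A, B, C terms
Yd[0, 2] = 5.0e-4 * np.exp(2j * np.pi / 5)
print(ckm_from_yukawas(Yu, Yd))  # [theta12, theta13, theta23, delta] in degrees

With the paper's actual fitted entries this is essentially the computation performed by the Mixing Parameter Tools package; the sketch only demonstrates that the Cabibbo angle is driven by the (1,4,2) column of Y_u, while θ^q_13, θ^q_23 and δ^q arise from ε and the small off-diagonal entries of Y_d.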
Lepton masses and mixing In this section we discuss the leading order predictions for PMNS mixing which arise from the neutrino Yukawa and Majorana matrices in Eq.27 which result in a very simple form of effective neutrino mass matrix, after the see-saw mechanism has been applied. Convention The neutrino Yukawa matrix Y ν is defined in a LR convention by 2 where α = e, µ, τ labels the three left-handed neutrinos and i = 1, 2, 3 labels the three right-handed neutrinos. The physical effective neutrino Majorana mass matrix m ν is determined from the columns of Y ν via the see-saw mechanism, where the light Majorana neutrino mass matrix m ν is defined by 3 L ν = − 1 2 m ν ν L ν c L + h.c., while the heavy right-handed Majorana neutrino mass matrix M R is defined by The PMNS matrix is then given by We use a standard parameterization U PMNS = R l 23 U l 13 R l 12 P l in terms of s l ij = sin(θ l ij ), c l ij = cos(θ l ij ), the Dirac CP violating phase δ l and further Majorana phases contained in P l = diag(e i β l 1 2 , e i β l 2 2 , 1). The standard PDG parameterization [37] differs slightly due to the definition of Majorana phases which are by given by P l PDG = diag(1, e i α 21 2 , e i α 31 2 ). Evidently the PDG Majorana phases are related to those in our convention by α 21 = β l 2 − β l 1 and α 31 = −β l 1 , after an overall unphysical phase is absorbed by U e L . See-saw mechanism The neutrino Yukawa and Majorana matrices are as in Eq.27, with Y ν = Y u in Eq.69, where we have ignored the small off-diagonal Majorana mass M 13 which gives a tiny mixing correction of order 10 −5 from Eq.42, and dropped the correction which is completely negligible in the lepton sector due to sequential dominance (see below). We have also assumed a phase in the Majoron VEV ξ ∼ e 4iπ/5 in the operators in Eq.25 responsible for the right-handed neutrino masses, as discussed below. Using Eq.79, the see-saw formula in Eq.76 leads to the neutrino mass matrix m ν , where, are three real parameter combinations which determine the three physical neutrino masses m 1 , m 2 , m 3 , respectively. According to sequential dominance m c will determine the lightest neutrino mass m 1 where we will have m 1 m 2 < m 3 , so that the third term arising from the heaviest right-handed neutrino of mass M 3 is approximately decoupled from the see-saw mechanism. (This is why the correction is completely negligible in the lepton sector.) In order to understand the origin of the relative phases η = 2π/5 which enter the neutrino mass matrix m ν , it is worth recalling that the see-saw operators responsible for the dominant first two terms of the neutrino mass matrix in Eq.80 have the form where we have written φ atm = φ u 1 , φ sol = φ u 2 to highlight the fact that the first term gives the dominant contribution to the atmospheric neutrino mass m 3 , while the second term controls the solar neutrino mass m 2 . The mild neutrino hierarchy between m 3 and m 2 emerges due to the choice of Majoron VEV ξ in Eq.42 which partly cancels the hierarchy in the square of the flavon VEVs in Eq.32. The lightest neutrino mass m 1 arises from smaller terms (not shown), leading to a normal neutrino mass hierarchy, where the heaviest atmospheric neutrino mass m 3 is associated with the lightest righthanded neutrino mass M 1 as in light sequential dominance [14]. 
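Since the see-saw formula (Eq.76) and the resulting mass matrix (Eq.80) are not written out above, a hedged reconstruction in a standard convention, using the column structure and phases described in the surrounding text, is

m_\nu \;=\; - v_u^2\, Y_\nu\, M_R^{-1}\, Y_\nu^{T}
\;\;\approx\;\;
m_a \begin{pmatrix}0\\1\\1\end{pmatrix}\begin{pmatrix}0&1&1\end{pmatrix}
\;+\; m_b\, e^{i\eta} \begin{pmatrix}1\\4\\2\end{pmatrix}\begin{pmatrix}1&4&2\end{pmatrix}
\;+\; m_c\, e^{i\eta} \begin{pmatrix}0\\0\\1\end{pmatrix}\begin{pmatrix}0&0&1\end{pmatrix},
\qquad \eta = \frac{2\pi}{5},

with m_a ∼ (a v_u)^2/M_1, m_b ∼ (b v_u)^2/M_2 and m_c ∼ (c v_u)^2/M_3, up to signs and an irrelevant overall phase. (The overall sign and factors of v_u follow the usual see-saw convention and are assumptions of this sketch; the column vectors, the reality of m_a, m_b, m_c and the relative phase η = 2π/5 are as described in the text.)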
Since φ atm and φ sol have the same phase, e −i3π/5 , and ξ has a phase 4 e 4iπ/5 , Eq.82 shows that the atmospheric term has a phase (e −i3π/5 ) 2 /(e 4iπ/5 ) 2 = e −14iπ/5 , while the solar term is real. After multiplying m ν by an overall phase e 4iπ/5 , which we are allowed to do since overall phases are irrelevant, the atmospheric term becomes real, while the other two terms pick up phases of e 4iπ/5 . This is equivalent to having a phase η = 2π/5 in Eq. 80. Different choices of phase for η are theoretically possible, but the phenomenologically successful choice for the relative phase of the atmospheric and solar terms (the first and second terms in Eq.80) is η = 2π/5, whereas for example η = −2π/5 leaves the mixing angles unchanged but reverses the sign of the CP phases [23,24,25]. The dependence on see-saw phases was fully discussed in [23]. Here we only note that in this model the see-saw phases are restricted to a discrete choice corresponding to the fifth roots of unity due to the Z 5 symmetry. The fact that the decoupled third term proportional to m c (responsible for the lightest neutrino mass m 1 ) has the same phase as the second term proportional to m b (responsible for the solar neutrino mass) is a new prediction of the current model and will affect the m 1 dependence of the results. From Eqs.29,30, the Dirac neutrino masses are equal to the up-type quark masses which are related to a, b, y t and hence Eq.81 becomes, Using Eq.83, the three right-handed neutrino masses M 1 , M 2 , M 3 may be determined for particular values of m a , m b , m c , and the known quark masses m u , m c , m t (evaluated at high scales). The neutrino mass matrix in Eq.80 may be diagonalised numerically to determine the physical neutrino masses and the PMNS mixing matrix as in Eq.77. We emphasise that, at leading order, with the phase η = 2π/5 fixed by the previous argument, the neutrino mass matrix involves just 3 real input parameters m a , m b , m c from which 12 physical parameters in the lepton sector are predicted, comprising 9 lepton parameters from diagonalising the neutrino mass matrix m ν in Eq.80 (the 3 angles θ l ij , 3 phases δ l , β l 1 , β l 2 and the 3 light neutrino masses m i ) together with the 3 heavy right-handed neutrino masses M i from Eq.83. The model is clearly highly predictive, involving 12 predictions in the lepton sector from only 3 input parameters. A first numerical example To take a numerical example, diagonalising the neutrino mass matrix in Eq.80, with the three input parameters the Mixing Parameter Tools package [38] gives the physical neutrino masses, Eq.83 then determines the three right-handed neutrino masses to be, Eq.84 shows the 3 input parameters, while Eqs.85, 87, 89 shows the 12 output predictions. One may regard the 3 input parameters in Eq.84 as fixing the 3 light physical neutrino masses in Eq.85, with all the 6 PMNS matrix parameters in Eq.87 as being independent predictions, along with the 3 right-handed neutrino masses in Eq.89. So far we have ignored charged lepton corrections which are expected in the model to be small. However the corrections are not entirely negligible as the following example shows. The charged lepton Yukawa matrix is given from Eq.33, which should be compared to the down quark Yukawa matrix in Eq.70. The off-diagonal elements of Y e are small, similar to those of Y d which are responsible for the small quark mixing angles and a correction to the Cabibbo angle of one degree. 
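Before turning to the charged-lepton corrections, a minimal numerical sketch (Python) of the leading-order step just described may be useful: diagonalise the reconstructed m_ν for given m_a, m_b, m_c and η = 2π/5, and read off the neutrino masses and mixing angles. The input values below are rough placeholders (only m_b ≈ 2.1 meV is directly supported by the later discussion of |m_ee|); they are not the fit of Eq.84, and the printed output is not expected to match the quoted predictions exactly.

import numpy as np

def pmns_from_mnu(mnu):
    # In the basis where the charged-lepton Yukawa matrix is diagonal, the columns
    # of U (eigenvectors of mnu^dagger mnu, ordered by ascending eigenvalue) give
    # the moduli of the PMNS entries; Majorana phases are not extracted here.
    w, U = np.linalg.eigh(mnu.conj().T @ mnu)
    masses = np.sqrt(np.clip(w, 0.0, None))
    th13 = np.arcsin(abs(U[0, 2]))
    th12 = np.arctan2(abs(U[0, 1]), abs(U[0, 0]))
    th23 = np.arctan2(abs(U[1, 2]), abs(U[2, 2]))
    return masses, np.degrees([th12, th13, th23])

eta = 2 * np.pi / 5
phi_atm = np.array([0, 1, 1], dtype=complex)
phi_sol = np.array([1, 4, 2], dtype=complex)
phi_dec = np.array([0, 0, 1], dtype=complex)
ma, mb, mc = 25.0e-3, 2.1e-3, 0.2e-3   # eV; placeholders, to be tuned to the measured dm^2 values
mnu = (ma * np.outer(phi_atm, phi_atm)
       + mb * np.exp(1j * eta) * np.outer(phi_sol, phi_sol)
       + mc * np.exp(1j * eta) * np.outer(phi_dec, phi_dec))
masses, angles = pmns_from_mnu(mnu)
print("m_i [meV]              :", np.round(1e3 * masses, 2))
print("th12, th13, th23 [deg] :", np.round(angles, 1))

Tuning m_a and m_b to reproduce the measured ∆m^2_31 and ∆m^2_21, and then adding the small Y_e corrections, is precisely the procedure that produces the ranges quoted in the following subsections.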
The quark mixing angles fix the three real parameters to be for example A = 9, B = 7, C = 36 and the down quark couplings in Eq.72. Including the charged lepton Yukawa matrix with these parameters and the same neutrino mass parameters as in Eq.84, the MPT package gives the lepton mixing parameters, Comparing the results in Eq.91 to those in Eq.87, we see that the atmospheric angle has increased by about 3 • to become maximal due to the (2, 3) element in the charged lepton Yukawa matrix, which is enhanced by a Clebsch factor of 3 relative to the same element in the down Yukawa matrix. The reactor angle has decreased slightly, and the CP oscillation phase has increased. With y 0 b taken to be positive instead of negative, and all the other parameters unchanged, we find the results below, The main effect of the sign of y 0 b is on the atmospheric and reactor angles. Modified Georgi-Jarlskog relations Since the charged lepton masses are known with much higher precision than the down type quark masses, the down Yukawa couplings in practice will be predicted from inputting the charged lepton masses in order to accurately fix y 0 d , y 0 s , y 0 b . Comparing Y e in Eq.90 to Y d in Eq.70, we find that we do not get exactly the GJ relations in Eq.37 due to the off-diagonal elements which also involve Clebsch factors. Numerically we find that, for y 0 b negative and the other parameters as above, the Yukawa eigenvalues at the GUT scale are approximately related as, while for y 0 b positive we find, y e = y d 3.0 , y µ = 2.7y s , y τ = 1.05y b . These may be compared to the phenomenological relation [3], For example for y 0 b negative we find the RHS to be 7.3 which differs by more than 4 sigma. In order to bring this relation into better agreement with experiment we would need to increase this ratio, for example by increasing the muon Yukawa eignenvalue compared to the strange quark Yukawa eigenvalue. One way to do this is to introduce a flavon φ d15 2 with the same charges as φ d 2 but in the adjoint 15 of SU (4) C . The middle diagram in Fig.3 involving φ d15 2 involves a Clebsch factor of +9 as compared to the factor of -3 with φ d 2 [35]. Below the PS the colour singlet component of φ d15 2 mixes with φ d 2 , to yield a light flavon combination, Hence middle diagram in Fig.3 involving φ d 2 implies the relation, y 0 For example by suitable choice of the mixing angle γ we can arrange y 0 µ = 4.5y 0 s , By comparing Y e in Eq.98 to Y d in Eq.70, we find the modified GJ relations, and hence, which reproduces the central value in Eq.95. In the above estimate we have assumed A = 9, B = 7, C = 36 and the other couplings in Eq.72. Using the same neutrino mass parameters as in Eq.84, the MPT package gives the same lepton mixing parameters as for the GJ form in Eq.91, to very good accuracy. Numerical results for neutrino masses and lepton mixing In our numerical results we shall use the charged lepton Yukawa matrix in Eq.98, together with the neutrino mass matrix in Eq.80, as summarised below, As discussed previously, the lepton mixing depends on predominantly on m ν which involves the three real mass parameters m a , m b , m c , which are effectively fixed by the neutrino masses. However there are small corrections coming from Y e , which involves the real parameters A, B, C which determine the quark mixing angles and the real Yukawa couplings y 0 d , y 0 s , y 0 b which were previously determined from the down-type quark masses. As discussed previously (c.f. 
Eqs.87, 91, 92) the effect on lepton mixing depends on the sign of y 0 b where the negative sign pushes up the atmospheric angle towards maximal, while also decreasing the reactor angle, while the positive sign has the opposite effect. Here we shall show results for the negative sign of y 0 b , as in Eq.72. We shall also use the same real parameters A = 9, B = 7, C = 36 which gave a good fit to the quark mixing angles and CP phase in Eq.75. Since lepton mixing depends mainly on the three real mass parameters m a , m b and m c which also determine the neutrino masses, we shall show results as a function of the neutrino mass parameters. Here we shall restrict ourselves to showing results where we keep the parameters appearing in Y e fixed at Fig.6, with the colour coding and line styles as before. the above "benchmark" values, and vary only m a , m b and m c . The parameter m a is mainly responsible for the atmospheric neutrino mass and hence ∆m 2 31 , while m b is mainly responsible for the solar neutrino mass and hence ∆m 2 21 , with m c being mainly responsible for the lightest neutrino mass m 1 , which is zero for m c = 0. Once the parameters m a and m b are chosen to fix ∆m 2 31 and ∆m 2 21 for m c = 0 , then all neutrino parameters are predicted as a function of m c and hence m 1 , as described below. Using the Mixing Parameter Tools package [38], in Fig.6 we show the neutrino mass squared differences as a function of the lightest physical neutrino mass m 1 , corresponding to varying m c for various fixed values of m a , m b as given in the figure caption. Note that ∆m 2 21 actually increases with m 1 . This is because, with fixed m a and m b , switching on m c also increases m 2 . Since m 2 2 increases linearly with m c , after expanding, this has a more significant effect on ∆m 2 21 than the quadratic increase of m 2 1 , in the region of small m c . In Fig.7 we show the resulting model predictions for the lepton mixing angles and CP oscillation phase. In all the plots (blue, red, green) coloured lines correspond to (high, central, low) values of ∆m 2 31 , while the (dashed, solid, dotted) styles correspond to (high, central, low) values of ∆m 2 21 . Note that the presently 3σ allowed range of mass squared parameters are [8,9,10]: ∆m 2 31 = (2.25 − 2.65).10 −3 eV 2 , ∆m 2 21 = (7.0 − 8.0).10 −5 eV 2 , and our choice of parameters covers most of these ranges. Thus the red solid curve corresponds to central values of both ∆m 2 31 and ∆m 2 21 for low values of m 1 , while the other curves reflect the uncertainty in the PMNS predictions due to the present precision in the neutrino mass squared differences. Using the Mixing Parameter Tools package [38], in Fig.7 we show the PMNS predictions of the model, resulting from Eqs.101,102, plotted as a function of the lightest neutrino mass m 1 . From Fig.7, the PMNS parameters are predicted to be in the following ranges: These predictions should be compared to the presently 3σ allowed ranges [10]: and the best fit values for a normal hierarchy with 1σ errors [7]: The solar angle prediction is 34 • > ∼ θ l 12 > ∼ 31 • , for the lightest neutrino mass in the range 0 < ∼ m 1 < ∼ 0.5 meV, corresponding to a normal neutrino mass hierarchy. Since the solar angle is very insensitive to ∆m 2 31 and ∆m 2 21 values, and decreases as m 1 increases, an accurate determination of the solar angle will accurately determine m 1 in this model. 
The model also predicts a reactor angle θ l 13 = 9 • ± 0.5 • , close to its best fit value, with a significant dependence on ∆m 2 31 and ∆m 2 21 . A striking prediction of the model is the atmospheric angle which is predicted to be close to maximal to within about one degree for nearly all allowed ∆m 2 31 and ∆m 2 21 . The bulk of the parameter space for low m 1 predicts in fact θ l 23 = 45 • ± 0.5 • . It is worth noting that the most recent fit [7] is quite compatible with maximal atmospheric mixing to within 1σ for the case of a normal mass squared ordering, when the latest T2K disappearance data is included. The model also predicts accurately the CP phase with the bulk of the parameter space around δ l = 260 • ± 5 • , compatible with the best fit value, although the latter has a much larger error. In general one can expect corrections coming from renormalisation group (RG) running [39,40] as well as canonical normalisation corrections [41]. For a SUSY GUT with light sequential dominance, as in the present model, the RG corrections for high tan β ∼ 50 have been shown to be [40]: ∆θ l 23 ∼ +1 • , ∆θ l 12 ∼ +0.4 • , ∆θ l 13 ∼ −0.1 • , where the positive sign means that the value increases in running from the GUT scale to low energy, while for low tan β < ∼ 10 the RG corrections are negligible compared to the range of the predictions. In particular the effect of right-handed neutrino thresholds [39] is expected to be negligible in this model since the heaviest right-handed neutrino mass is close to the GUT scale, while the lighter right-handed neutrinos have very small Yukawa couplings given by a ∼ 2.10 −5 and b ∼ 10 −3 from Eq.32. We emphasise that, since the parameters in Y e in Eq.102 are fixed from the quark sector, and the light neutrino masses are determined by three real parameters m a , m b , m c in Eq.101, the entire PMNS matrix containing 3 mixing angles and 3 CP phases emerges as a prediction of the model, although 2 of these CP phases will be difficult to measure for a normal neutrino mass hierarchy, so we have not plotted their predictions. The model may be tested most readily by its prediction of maximal atmospheric mixing and a normal neutrino mass hierarchy. It would be interesting to perform a χ 2 analysis of the quark and lepton masses and mixing angles predicted by the model, but that is beyond the scope of the present paper. In the present model |m ee | is predicted to be always very small and unobservable in the foreseeable future. For example, for the parameters in Eq.84, 85 and 91, we find, The sum of neutrino masses is relevant for cosmology, since it contributes to hot dark matter, leading to a constraint on its value and eventually a measurement. This is defined by, Due to the rather strong normal hierarchy, this value is dominated by the value of m 3 , which is controlled by the parameter m a in the neutrino mass matrix in Eq.101. In Fig.9 we show the neutrinoless double beta decay parameter |m ee | (left panel) and the sum of neutrino masses Σm i (right panel) as predicted by the model, using the same parameter sets and colour coding as for the other plots. Note that for |m ee | (left panel) the three colours corresponding to different values of m a lie accurately on top of each other. The three dashed curves predict |m ee | ≈ 2.15 meV, the three solid curves predict |m ee | ≈ 2.10 meV and the three dotted curves predict |m ee | ≈ 2.05 meV, corresponding to the three different values of m b = 2.15, 2.10, 2.05. 
This can be understood from the neutrino mass matrix in Eq.101, since |m ee | = |m ν 11 | = m b , with the charged lepton matrix in Eq.98 providing only very small corrections to this result. The fact that Eq.106 was used to calculate the results and agrees very accurately with the expectation |m ee | = |m ν 11 | = m b provides a highly non-trivial check on our calculation of PMNS parameters and neutrino masses, and gives confidence to all our results. Note that |m ee |, being equal to m b , is approximately fixed by ∆m 2 21 in Fig.6. Since |m ee | is predicted to be too small to measure in the foreseeable future, an observation of neutrinoless double beta decay could exclude the model. Similar comments apply to a cosmological observation of Σm i . HO corrections to vacuum alignment The triplet vacuum alignments are achieved by renormalisable superpotentials, as discussed in [24]. Since the messenger scale associated with any non-renormalisable corrections to vacuum alignment is unconstrained by the model, it is possible that any such terms may be highly suppressed. In the present analysis we shall therefore ignore any HO corrections to the vacuum alignments in Eqs.12,13. HO corrections to Yukawa operators Let us now consider HO corrections to the operators in Eqs. 22,23,24, consisting of extra insertions of φ, leading to effective operators of the type, for n > 1. For example, Σu are both singlets of Z 5 , so either of these ratios may in principle be inserted into any of the LO operators in Eqs.22,23 24. However in practice, which HO insertions are allowed will depend on the details of the messenger sector. In order for an effective operator to be allowed, it is necessary that that the messenger diagram responsible for it can be drawn, and whether this is possible or not will depend on the choice of charges of the messenger fields X F and X F under all the symmetries. In order to allow such HO operators as in Eq.109, for n > 1, at least one of the messenger fields X F and X F would have to be a triplet of A 4 in order to permit the coupling X F φX F where φ is a triplet, as is clear from Fig.10 (left panel). Such triplet messenger fields X F and X F are not required in order to construct the LO operators and must be introduced for the sole purpose of allowing the HO operators of this kind. Moreover, such triplet messenger fields would be dangerous since they may allow operators of the kind in Eq.109 for n = 1 involving the Higgs triplet h 3 which could contribute to up and charm quark masses for example. For these reasons we have chosen not to introduce any messenger fields X F and X F which are triplets of A 4 , thereby forbidding HO operators of the type shown in Eq.109 for n ≥ 2 involving any Higgs fields or involving the A 4 triplet Higgs h 3 for n ≥ 1. The couplings in Eqs. 19,20,21 can also lead to HO operators of the generic kind, after integrating out the messengers, as shown in Fig.10 (right panel). where n ≥ 1. At the order n = 1, only a single operator of this kind is generated, which gives a correction in the (1,1) entry of Y u and hence a contribution to the up quark Yukawa coupling, Figure 10: Some possible higher order diagrams. The left panel shows a generic diagram involving triplet fermion messengers, which if present, would lead to effective higher order operators as in Eq.109. In our model we assume such triplet messengers to be absent which prevents diagrams with more than one φ field. 
The right panel shows a generic diagram responsible for the effective higher order operators as in Eq.110. where we have used y 0 d given in Eq.34. The correction is small if u Σ d d Σ u . HO corrections to Majorana operators The relevant bilinear charges in the Majorana sector are The messengers which transform under A 4 × Z 5 as X ξ i ∼ (1, α i ) can couple to the Majoron field ξ ∼ (1, α 4 ) leading to the LO operators in Eq.25 (dropping H c and Λ), Since each insertion of ξ carries a suppression factor of ξ /Λ ∼ 10 −5 , HO operators involving more powers of ξ, such as F c 1 F c 2 ξ 4 , are negligible. Conclusions In this paper we have proposed a rather elegant theory of flavour based on the Pati-Salam gauge group combined with A 4 × Z 5 family symmetry which provides an excellent description of quark and lepton masses, mixing and CP violation. Pati-Salam unification relates quark and lepton Yukawa matrices and in particular predicts Y u = Y ν , leading to Dirac neutrino masses being equal to up, charm and top masses. The see-saw mechanism involves very hierarchical right-handed Majorana neutrino masses with sequential dominance. The A 4 family symmetry determines the structure of Yukawa matrices via CSD4 vacuum alignment, with the three columns of Y u = Y ν being proportional to (0, 1, 1) T , (1, 4, 2) T and (0, 0, 1) T , respectively, where each column has a multiplicative phase determined by Z 5 breaking, which controls CP violation in both the quark and lepton sectors. The other Yukawa matrices Y d ∼ Y e are both approximately diagonal, with charged lepton masses related to down quark masses by modified GJ relations, and containing small off-diagonal elements responsible for the small quark mixing angles θ q 13 and θ q 23 . The model hence predicts the Cabibbo angle θ C ≈ 1/4, up to such small angle corrections. The main limitation of the model is that it does not predict the charged fermion masses. However the third family masses are naturally larger since they arise at renormalisable order, while the hierarchy between first and second family masses can be understood to originate from hierarchies between flavon VEVs. Although the model does not predict the small quark mixing angles, it does offer a qualitative understanding of both CP violation and the Cabibbo angle θ C ≈ 1/4, which, as discussed above, is closely related to the lepton mixing angles via the CSD4 vacuum alignment. Moreover, the model contains 6 fewer parameters in the flavour sector than the 22 parameters of the SM, and hence predicts the entire PMNS matrix, as is clear from Eqs.101,102 where all the parameters which appear there are fixed by fermion (including neutrino) masses and small quark mixing angles. Hence the model predicts the entire PMNS lepton mixing matrix with no free parameters, including the three lepton mixing angles and the three leptonic CP phases with negligible theoretical error from HO corrections. The resulting PMNS matrix turns out to have an approximate TBC form as regards maximal atmospheric mixing and the reactor angle θ l 13 ≈ 9 • , although the solar angle deviates somewhat from its tri-maximal value, corresponding to a negative deviation parameter s ∼ −0.03 to −0.1, where sin θ l 12 = (1 + s)/ √ 3 [42]. The predictions of a normal neutrino mass hierarchy and maximal atmospheric angle will both be either confirmed or excluded over the next few years by current or near future neutrino experiments such as SuperKamiokande, T2K, NOνA and PINGU [43]. 
The Daya Bay II reactor upgrade, including the short baseline experiment JUNO [44], will also test the normal neutrino mass hierarchy and measure the reactor and solar angles to higher accuracy, enabling precision tests of the predictions θ l 13 = 9 • ± 0.5 • and 34 • > ∼ θ l 12 > ∼ 31 • , for the lightest neutrino mass in the range 0 < ∼ m 1 < ∼ 0.5 meV. With such a mass range, neutrinoless double beta decay will not be observable in the foreseeable future. In the longer term, the superbeam proposals [45] would measure the atmospheric mixing angle to high accuracy, confronting the prediction θ l 23 = 45 • ±0.5 • , and ultimately testing the prediction of the leptonic CP violating oscillation phase δ l = 260 • ± 5 • . A A 4 A 4 has four irreducible representations, three singlets 1, 1 and 1 and one triplet 3. The products of singlets are: The generators of the A 4 group, can be written as S and T with S 2 = T 3 = (ST ) 3 = I. We work in the Ma-Rajasakaran basis [16] where the triplet generators are, In this basis one has the following Clebsch rules for the multiplication of two triplets, where ω 3 = 1, a = (a 1 , a 2 , a 3 ) and b = (b 1 , b 2 , b 3 ). Under a CP transformation in this basis we require [31], so that where h 1 and h 2 form two SU (2) L doublets with U (1) T 3R charges of −1/2 and 1/2. Henceforth it is convenient to use a slightly different notation as follows. We label each of the Higgs bi-doublets as h a (2, 2) and, below the SU (2) R breaking scale, each of them will split into two Higgs doublets, denoted as h ± a (2, ±1/2) labelled by their U (1) T 3R charges of ±1/2, rather than their electric charges as shown in Eq.120. Thus the five bi-doublets above will yield eight Higgs doublets from h ± u , h ± d and the colour singlet parts of h d± 15 , h u± 15 , plus additional colour triplet and octet Higgs doublets from h d± 15 , h u± 15 , together with the six Higgs doublets from h ± 3 . We shall arrange for nearly all of these Higgs doublets to have superheavy masses near the GUT scale, leaving only the two light Higgs doublets H u and H d , as follows. The h 3 multiplet, which will be mainly responsible for the third family Yukawa couplings, is a triplet of A 4 . We introduce a triplet φ 3 ∼ 3 which is a PS and Z 5 singlet and couples as φ 3 h 3 h 3 . If φ 3 develops a VEV in the third direction 5 , φ 3 ∼ (0, 0, V 3 ), then, using the Clebsch rules in Eq.119, this gives a large mass to the first two A 4 components of h 3 while leaving the third component massless. Introducing a TeV scale mass term µh 3 h 3 will give a light mass to the third component of h 3 . The Higgs bidoublets in the third A 4 component of h 3 will mix with other Higgs bi-doublets as discussed below and two linear combinations of the mixed states, H u and H d , will remain light, allowing the renormalisable third family Yukawa couplings. The operators involving the Higgs fields h u , h d , h d 15 , h u 15 , collectively denoted as h a , have the general form, where S ab are Pati-Salam singlet fields which develop VEVs somewhat higher than the Pati-Salam breaking scale. When H c gets a VEV in its right-handed neutrino component, it will project out the T 3R = +1/2 component of h a , which we write as h + a . Similarly when H c gets a VEV in its right-handed neutrino component, it will project out the T 3R = −1/2 component of h b , which we write as h − b . The diagrams responsible for generating the operators of the form in Eq.121 are shown in Fig.11. 
These diagrams should be considered as Higgsino doublet mixing diagrams. The Higgsino messenger fields which couple to (h a H c ) are denoted as X Ha and those which couple to (H c h b ) are denoted as X H b , where the messenger masses are generated by the couplings X Ha S ab X H b when S ab develops its VEV, leading to the effective operators in Eq.121. The choice of singlets S 11 , S 33 , S 24 , S 34 with appropriate Z 5 and A 4 charges, lead to the following particular operators of the general form of Eq.121: Note that S ab has the same A 4 × Z 5 charges as S ba . In addition we require the following three operators, involving the third component of h 3 , given by h 3 .φ 3 , Since the matrix of charges is symmetric (since S ab has the same A 4 × Z 5 charges as S ba ) the operators above must be given by a particular messenger sector which forbids similar operators with H c and H c interchanged. (125)
2014-09-04T07:37:00.000Z
2014-06-26T00:00:00.000
{ "year": 2014, "sha1": "7bd1a79566f38a3d5f86fd31cdffc2ee9c5d30b9", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP08(2014)130.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "7bd1a79566f38a3d5f86fd31cdffc2ee9c5d30b9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
270390408
pes2o/s2orc
v3-fos-license
Performance of contemporary cardiovascular risk stratification scores in Brazil: an evaluation in the ELSA-Brasil study Aims Despite notable population differences in high-income and low- and middle-income countries (LMICs), national guidelines in LMICs often recommend using US-based cardiovascular disease (CVD) risk scores for treatment decisions. We examined the performance of widely used international CVD risk scores within the largest Brazilian community-based cohort study (Brazilian Longitudinal Study of Adult Health, ELSA-Brasil). Methods All adults 40–75 years from ELSA-Brasil (2008–2013) without prior CVD who were followed for incident, adjudicated CVD events (fatal and non-fatal MI, stroke, or coronary heart disease death). We evaluated 5 scores—Framingham General Risk (FGR), Pooled Cohort Equations (PCEs), WHO CVD score, Globorisk-LAC and the Systematic Coronary Risk Evaluation 2 score (SCORE-2). We assessed their discrimination using the area under the receiver operating characteristic curve (AUC) and calibration with predicted-to-observed risk (P/O) ratios—overall and by sex/race groups. Results There were 12 155 individuals (53.0±8.2 years, 55.3% female) who suffered 149 incident CVD events. All scores had a model AUC>0.7 overall and for most age/sex groups, except for white women, where AUC was <0.6 for all scores, with higher overestimation in this subgroup. All risk scores overestimated CVD risk with 32%–170% overestimation across scores. PCE and FGR had the highest overestimation (P/O ratio: 2.74 (95% CI 2.42 to 3.06)) and 2.61 (95% CI 1.79 to 3.43)) and the recalibrated WHO score had the best calibration (P/O ratio: 1.32 (95% CI 1.12 to 1.48)). Conclusion In a large prospective cohort from Brazil, we found that widely accepted CVD risk scores overestimate risk by over twofold, and have poor risk discrimination particularly among Brazilian women. Our work highlights the value of risk stratification strategies tailored to the unique populations and risks of LMICs. INTRODUCTION Despite the increasing focus on personalised cardiovascular prevention based on risk assessment, accurately defining the risk of cardiovascular disease (CVD) remains a challenge. 1 2This challenge is magnified in low-and middle-income countries (LMICs), which account for over 75% of global CVD deaths but lack sufficient high-quality data to inform effective risk assessment strategies. 3 4his is particularly important as population demographics and lifestyle choices may result in differing levels of risk among LMIC populations compared with those in the USA and Europe.However, risk stratification strategies used in LMICs, such as Brazil, are predominantly derived and validated in the USA and Europe. 5 6n recent years, novel cardiovascular risk scores targeted for LMICs have emerged, WHAT IS ALREADY KNOWN ON THIS TOPIC ⇒ Cardiovascular disease (CVD) risk scores developed in high-income Western countries are being adopted in national guidelines in low-income and middle-income countries (LMICs) without a systematic assessment of their performance in these populations. WHAT THIS STUDY ADDS ⇒ In a large, well-characterised cohort study from Brazil, we identify a high overestimation of risk by commonly used CVD risk scores, exceeding those seen in validation studies performed in other Western nations.⇒ Current CVD risk scores, including those recalibrated for LMICs, fail to accurately capture risk in a Brazilian population and perform poorly among Brazilian women. 
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY ⇒ The study highlights the critical need for new calibration strategies and risk assessment tools to inform policy decisions regarding CVD prevention and resource allocation in Brazil and similar LMIC settings. aiming to enhance risk prediction in diverse populations. 7 8Brazil has considerable racial and demographic diversity, characterised by a unique interplay of dietary and lifestyle patterns as well as environmental exposures, shaping the prevalence of risk factors and contributing to a distinct susceptibility to cardiovascular-related outcomes. 9 10These differences emphasise the critical need for targeted risk stratification algorithms in tailoring preventive strategies, ensuring efficient resource allocation and addressing the specific needs of a population facing cardiovascular risk factors in a developing country. 11n this study, we leverage the largest and the most racially diverse cohort in Brazil-the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil) to examine the performance of contemporary cardiovascular risk scores, including Framingham General Risk (FGR), 12 Pooled Cohort Equation (PCE), 13 WHO CVD risk score, 7 Globorisk-LAC 8 and European Society of Cardiology's Systematic Coronary Risk Evaluation 2 (SCORE-2) 14 in predicting incident CVD events. Data source The ELSA-Brasil is a large-scale multicentre and multiracial cohort study aimed at investigating risk factors and determinants of chronic diseases, especially CVD, in the Brazilian population.The study began in 2008 and included 15 105 public servants from higher education and research institutes, 35-75 years old, from 6 state capitals in Brazil.Follow-up visits were conducted every 3-4 years to ascertain exposure status and identify changes in baseline subclinical and clinical parameters.In addition, Cardiac risk factors and prevention all participants (or their proxy) were interviewed yearly via telephone to obtain information on new diagnoses, hospitalisation and death.Details about the design and cohort profile have been previously published, 15 16 and details on key elements are provided in the following sections.All six investigation centres approved the ELSA-Brasil protocol, and all participants signed an informed consent. Study population To construct a primary prevention cohort suitable for applying current cardiovascular risk scores, we included all ELSA-Brasil participants aged 40-75 years who did not report any previous CVD at baseline, including myocardial infarction (MI), stroke, coronary revascularisation or heart failure.Participants with missing data about race (n=153, 1.24%), CVD prevalence (n=594, 4.26%) or statin use (n=55, 0.44%), and those who did not participate in follow-up visits (n=10, 0.08%) were excluded (figure 1). Cardiovascular risk scores We calculated five risk prediction scores, using six different equations: FGR 12 -currently recommended by Ministry of Health's Brazilian guidelines 17 and by the Brazilian Society of Cardiology in an adapted version, PCE (PCEs from the American College of Cardiology/ American Heart Association), 13 African-American and White-American equations, WHO (WHO CVD risk score) recalibrated for Tropical Americas, 7 Globorisk-LAC recalibrated for Brazil 8 and SCORE-2 recalibrated to low-risk populations, from the European Society of Cardiology. 
14he PCEs, WHO, Globorisk-LAC, and SCORE-2 predict 10-year individual risk of coronary heart disease (CHD) death, non-fatal MI and fatal or non-fatal ischaemic stroke.For the FGR, the Framingham Heart Study defines CVD as a composite of CHD (coronary death, MI, coronary insufficiency, and angina), cerebrovascular events (including ischaemic stroke, haemorrhagic stroke, and transient ischaemic attack), peripheral artery disease (intermittent claudication) and heart failure. Events were censored on 31 December 2013, with a median follow-up time of 4.2 years.Employing a method used by previous studies, 18 19 we lowered individual 10-year risk estimates to correspond to their length of follow-up using an exponential survival function to scale the predicted risk, described in further detail in online supplemental file 1. Details about the risk scores can be found in online supplemental table 1. Study outcomes and adjudication of events The study outcomes were aligned with the outcomes predicted by each of the five scores.They spanned major adverse cardiovascular events, such as fatal or non-fatal MI, fatal or non-fatal stroke, and cardiovascular death, and included heart failure and peripheral artery disease for the FGR score.In the ELSA study, the events were identified either by in-person interview or the annual telephone call and then investigated by a designated committee that contacted health providers and requested copies of medical records for all hospitalisations, outpatient diagnoses, and death certificates.More details about the follow-up for events in the ELSA-Brasil can be found in a previous publication. 20After investigation, the cardiovascular events were then adjudicated according to predefined definitions by the independent review of two cardiologists.A third senior cardiologist defined the event in case of disagreement.MI was defined as an increase in cardiac biomarkers (such as troponin) above the 99th percentile of the reference population, with at least one of the following: symptoms of ischaemia, ECG changes indicative of new ischaemia, development of pathological Q waves on the ECG, imaging evidence of new loss of viable myocardium or new regional wall motion abnormality or identification of an intracoronary thrombus by angiography or autopsy.Stroke was defined as a sudden onset of a focal neurological deficit persisting for at least 24 hours, or leading to death, and attributable to a vascular cause.Heart failure was defined by medical diagnosis and specific treatment and/or pulmonary oedema in X-rays, and/or ventricular function on echocardiogram/radionuclide scintigraphy or contrast ventriculography; and peripheral arterial disease was defined based on symptoms, diagnostic procedure or therapeutic intervention.Death due to cardiovascular causes includes deaths caused by CHD, MI, stroke, heart failure and arrhythmias. The classification of underlying causes of death in the ELSA study is based on the guidelines of the Brazilian Ministry of Health, which follows the 10th revision of the International Classification of Diseases.The cause of death is ascertained by death certificates, hospital records and autopsy reports.In cases where the cause of death was unclear or disputed, an expert panel reviewed the available data to confirm the underlying cause of death. 
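A minimal sketch of the risk rescaling referred to above (the study's exact implementation is given in online supplemental file 1): under a constant-hazard (exponential survival) assumption, a predicted 10-year risk p10 corresponds to a t-year risk of 1 − (1 − p10)^(t/10). In Python:

import numpy as np

def rescale_risk(p10, t_years):
    # Constant-hazard assumption: S(t) = exp(-lam * t) with S(10 years) = 1 - p10,
    # so the risk accumulated over t years is 1 - (1 - p10) ** (t / 10).
    p10 = np.asarray(p10, dtype=float)
    return 1.0 - (1.0 - p10) ** (t_years / 10.0)

# Example: a 10% predicted 10-year risk scaled to the median 4.2-year follow-up
print(rescale_risk(0.10, 4.2))   # about 0.043

Each participant's predicted risk is scaled to his or her own follow-up time in this way before being compared with the observed events.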
Study covariates The risk factors used to calculate risk scores were age, sex, total cholesterol (TC), high-density lipoprotein cholesterol (HDL-c), systolic blood pressure (SBP), smoking status, history of diabetes mellitus (DM), hypertension treatment (PCE and FGR) and race (PCE).For PCE, the model specifications recommend that all non-black individuals have their risk calculated according to the equation for whites.In our study, we used the African-American and the White-American equations to measure the risk for the black and 'Pardo' populations.Race and smoking status were self-reported.Race was categorised as black, 'Pardo' (mixed), white, or other (Asian and Indigenous were combined due to the low number of events in each separate population).The ELSA-Brasil routines, organisation of clinical tests and definition of DM, SBP, TC, HDL-c and body mass index can be found in previous publications 21 and in online supplemental material (p.03).All participants were requested to bring to the investigation centre all continuous medication they were taking during the 2 weeks preceding the interview.To be considered under antihypertensive or statin medication, the participant should declare taking at least one medication from these classes. Statistical analysis Categorical variables were defined as counts and percentages, and differences between racial groups were assessed by the χ 2 test.Continuous variables were defined by median and IQR, and differences between racial groups were tested by the analysis of variance. We evaluate the performance of risk scores across both model discrimination and calibration, as these models serve as out-of-box tools used directly in each candidate population.We used the c-statistic reflecting the area under the receiver operating characteristic curve (AUC) to assess discrimination.We compared mean 4-year predicted CVD risk to observed 4-year cumulative CVD events incidence across baseline deciles of risk estimates and by risk categories. We assessed calibration by predicted-to-observed risk (P/O) ratios and calculated the Grønnesby-Borgan goodness-of-fit test.A P/O ratio >1 indicated an overestimation of risk, a P/O ratio <1 underestimation and a P/O ratio=1 perfect calibration.All analyses were performed for the total population and then stratified by sex/race groups (black/'Padro' men; white men; black/'Pardo' women; white women). As a sensitivity analysis, we limited the population to participants with clinical criteria consistent with guideline recommendations for using CVD risk scores to guide statin therapy (not taking statins at baseline, not having DM, and with an LDL-c between 70 and 189 mg/dL).We also performed an analysis stratified by education (college/high school/middle school), a proxy for socioeconomic status. Patient and public involvement The participants were not involved in the planning of the study or in the dissemination of the study results. RESULTS We included 12 155 individuals with a mean (SD) age of 53.0 (8.2) years, including 6722 (55.3%) females, and 6328 (52.1%) individuals self-reported as white.Antihypertensive medications and statins were being used by 27.3% and 11.4%, of the individuals at baseline, respectively.Baseline characteristics and risk factors varied according to race categories except for TC values (table 1). 
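To make the discrimination and calibration summaries of the Statistical analysis subsection concrete, the following Python sketch computes the c-statistic, the overall predicted-to-observed (P/O) ratio, and a decile-level calibration table. The data-frame layout, the column names and the use of pandas/scikit-learn are assumptions for illustration only, and censoring is ignored for simplicity (the study itself compared predicted risk with the cumulative incidence of adjudicated events and used the Grønnesby-Borgan test).

import pandas as pd
from sklearn.metrics import roc_auc_score

def discrimination_and_calibration(df, risk_col="pred_risk_scaled", event_col="cvd_event"):
    # Discrimination: area under the ROC curve (c-statistic).
    auc = roc_auc_score(df[event_col], df[risk_col])
    # Calibration: mean predicted risk divided by the observed event proportion.
    po_ratio = df[risk_col].mean() / df[event_col].mean()
    # Decile-level comparison of mean predicted risk and observed incidence.
    deciles = pd.qcut(df[risk_col], 10, labels=False, duplicates="drop")
    by_decile = df.groupby(deciles).agg(predicted=(risk_col, "mean"),
                                        observed=(event_col, "mean"),
                                        n=(event_col, "size"))
    return auc, po_ratio, by_decile

# df holds one row per participant, with the follow-up-scaled predicted risk and a
# 0/1 indicator of an adjudicated CVD event; the same function is applied to the
# sex/race subgroups for the stratified analyses.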
Over a median (IQR) follow-up of 4.2 (3.7-4.5)years, 149 (1.2%) fatal and non-fatal cardiovascular events were ) fatal CVD events.The cumulative risk of CVD events increased linearly during the 5-year follow-up period (figure 2). Risk scores accuracy The discrimination of the 5 scores within the overall ELSA-Brasil population was comparable to their performance in the cohorts where they were originally developed, with an AUC varying between 0.75 (95% CI 0.71 to 0.80) and 0.77 (95% CI 0.72 to 0.81) (table 2 and figure 3).However, in the analysis stratified by sex and race, all tested scores had poor discrimination for women self-reported as white (AUC FGR: 0. 2). Discordance between observed and predicted risk was found for both men and women throughout the risk continuum, with the highest gap among those with a predicted risk ≥10% (figure 4).We observed higher overestimation in whites compared with blacks/'Pardos' and all 5 scores showed the worst calibration results among white women (online supplemental figure 4). The use of PCE African American equation for risk estimation in Brazilian Blacks and 'Pardos' showed worse calibration compared with the use of the white-American Cardiac risk factors and prevention equation in this population (men: P/O ratio 3.97 95% CI 2.94 to 5.00 vs 2.38 95% CI 1.37 to 3.39 and women: P/O ratio 3.70 95% CI 2.74 to 4.66 vs 2.81 95% CI 1.68 to 3.94, respectively) (online supplemental figure 5). The sensitivity analysis, limited to ELSA participants that met the criteria for the use of CVD risk score to guide statin therapy (not taking statins at baseline with an LDL-c between 70 and 189 mg/dL and without DM), demonstrated similar discrimination results (AUC between 0.73, 95% CI 0.63 to 0.78 and 0.78, 95% CI 0.70 to 0.82).There was continued overestimation in this population, apart from SCORE-2, which underestimated the risk for those with risk ≥5% and for black/'Pardo' women.All results for the sensitivity analysis are summarised in online supplemental table 4. The stratified analysis by education showed similar discrimination results but better calibration among individuals with lower education (middle school) compared with those with higher education backgrounds (college) for the FGR, PCE and Globorisk-LAC.In the population with lower educational attainment, WHO and SCORE-2 underestimated the CVD risk (online supplemental table 5). DISCUSSION To our knowledge, this is the first large cohort study to assess the performance of CVD risk scores in Brazil and the first to test calibration and discrimination of widely used CVD predictive scores in a South American country.In the large prospective ELSA-Brasil, while current scores had cardiovascular risk discrimination consistent with the development cohorts, the models performed poorly for many key demographic groups.Specifically, models performed poorly for white women, representing nearly half of all women in ELSA Brasil.Moreover, despite their purported use as out-of-box calculators, all risk scores overestimated the CVD risk nearly twofold throughout the risk continuum, with WHO score recalibrated for Tropical Americas closest aligning between predicted risk and observed events.These differences persisted in the subpopulation where CVD risk scores are used to guide statin therapy. Estimating the absolute cardiovascular risk is the foundation of national guidelines for CVD prevention, defining blood pressure targets and optimal utilisation of cholesterol-lowering medication. 
While many studies assessing the performance of different CVD risk scores have suggested risk overestimation, [25][26][27] the degree of overprediction in ELSA-Brasil is substantially higher. Brazil boasts significant racial and demographic diversity, setting its population apart from the typically less diverse, high-income cohorts used to derive cardiovascular risk scores. 9 28 Genetic variations, dietary and lifestyle habits, and differences in environmental exposures contribute to variations in susceptibility to cardiovascular events. 29 30 The downstream cardiovascular outcomes might also be affected by the ubiquitous public health access granted by Brazil's universal health system. 31 Despite challenges in quality metrics and coordination between levels of care, the system has achieved significant gains over the past 30 years, enhancing coverage and access to healthcare services and consequently yielding improved health outcomes overall. 31 32 The disparity in the relevance of risk factors between the cohorts used to formulate these scores and contemporary populations in developing countries may underlie the inadequate estimation of the impact of individual risk factors incorporated into the models. 33 34 Moreover, even recently developed scores are rooted in older cohorts that exhibit fundamental differences from the Brazilian population, 8 35 differences that can limit adequate risk calibration and hinder the effectiveness of recalibration. 18 36

We observed a higher overestimation of risk when applying the PCE African-American equation to black and 'Pardo' Brazilians compared with the use of the White-American equation in the same population. A recent study comparing estimates of 10-year CVD risk in Black and White individuals with identical risk profiles showed that the PCE might yield significantly different CVD risk estimates for these two racial groups. They examined these aspects through computer simulations and in two distinct community-based samples. 37 Similarly, Yadlowski et al showed that the PCE had risk estimates varying from 80% lower to more than 500% higher for black adults compared with white adults with otherwise identical risk factors. 38 These findings hold significant clinical importance, particularly in a country like Brazil where there is a large mixed-race population. Discrepancies in risk assessment based on race could potentially lead to inaccurate clinical recommendations for CVD prevention. 39 Efforts to address systemic racism in medicine have led to a reevaluation of race modifiers in medical algorithms, such as those for estimating glomerular filtration rate (eGFR), 40 with studies indicating that racial disparities in eGFR prevalence may be predominantly attributed to health inequalities, discouraging the application of race corrections. 41
Our study has several key strengths that enhance the reliability and significance of our results. First, the data from this investigation present new findings from the Brazilian population, highlighting the relevance of our research in an understudied population subset. Second, the study benefits from a large sample size, enabling robust statistical analyses and reliable assessments of the risk scores' performance. The event collection and adjudication process in the ELSA study is rigorous and includes successfully obtaining medical records and classifying 87% of hospital and outpatient reports of CVD events, and achieving more than 90% of follow-up telephone interviews of living participants. Finally, the results were robust in sensitivity analyses that explicitly focused on populations without any risk modification with lipid-lowering therapy at baseline, suggesting that the patterns observed are not driven by differences in baseline risk management.

It is important to acknowledge some limitations. One notable limitation is the shorter follow-up duration, with a median follow-up of 4 years. The shorter follow-up may have influenced our ability to capture long-term changes in risk profiles, considering the disproportionate risk increase during middle age (50-70 years), which is the mean age of our population. A more extended follow-up would have provided a more comprehensive understanding of risk trends. However, we observed a linear cumulative risk of CVD during the 5-year follow-up, which supports the extrapolation of our results to 10-year risk. Another key consideration is that the ELSA-Brasil cohort comprises individuals enrolled from the community and does not represent a high-risk group. However, the observed event rates in ELSA-Brasil over the follow-up period are consistent with those in similar studies. 25 Despite focusing exclusively on individuals in the primary prevention of CVD, the cohort includes a significant proportion of higher-risk individuals, with 14% of the population having a CVD risk greater than 10% according to PCE. The ELSA-Brasil study comprises adults from six diverse regions of Brazil, representing a spectrum of socioeconomic and educational backgrounds.

Notably, 12.5% of Brazilian adults are government employees, representing a substantial proportion of the population. These employees span a broad socioeconomic spectrum and are not limited to professional staff at these institutions. This diversity is reflected in the educational and socioeconomic distribution captured in ELSA-Brasil, where 19% were manual workers, 46% were in a middle socio-occupational category, and 38% were in a higher socio-occupational category, representing managerial or professional occupations. 42 Similarly, 12% were in the low-income category, and nearly 40% were in the middle-income category. Moreover, less than half had a university degree, with 34% with a high school education and 10% with an elementary school education or less. National assessments indicate that the lower and middle classes represent 20% and 65% of the population, respectively. 42 Therefore, while, like other cohort studies, ELSA-Brasil does not include a fully representative sample of Brazilian adults, it does include a wide range of socioeconomic, educational and occupational classes. To further address the potential limitations in generalisability, we identified persistent overprediction of risk across educational categories, arguing against differences between individuals in our study cohort and the general population.
In conclusion, in this large prospective cohort study from Brazil, we found that widely accepted CVD risk scores overestimate risk by over twofold and particularly do not adequately define risk for Brazilian women and other demographic groups. The recalibrated WHO score for the Tropical Americas was best calibrated but still had performance issues. Our study highlights the value of risk stratification strategies tailored to the unique populations and risks of LMICs.

Figure 1 Flow chart of the study population. Brazilian Longitudinal Study of Adult Health (ELSA-Brasil, 2008). CVD, cardiovascular disease.

Figure 2 Cumulative risk of CVD events during 5-year follow-up. ELSA-Brasil (2008-2013), N=12 155. (N represents the cumulative number of events observed during each follow-up period for calculating the annual cumulative risk of CVD events among all eligible adults 40-75 years of age in the ELSA-Brasil (2008-2013)). CVD, cardiovascular disease; ELSA-Brasil, Brazilian Longitudinal Study of Adult Health.

Table 2 Discrimination and calibration of the FGR, PCE, WHO, Globorisk-LAC, and SCORE-2 for the total study population and for sex and race groups.
2024-06-13T06:16:08.544Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "16c9e19cd3dce62620e01757bbaf60a9f4afb22b", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1136/openhrt-2024-002762", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "883b2d5f650aa98d72353c97c0bb8258a11ad7b2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266925246
pes2o/s2orc
v3-fos-license
Ultra-high-speed dynamics of acoustic droplet vaporization in soft biomaterials: Effects of viscoelasticity, frequency, and bulk boiling point

Graphical abstract

Introduction

Ultrasound-responsive droplets, termed phase-shift droplets, have advanced the field of biomedical ultrasound. Phase-shift droplets consist of a perfluorocarbon liquid core stabilized by a shell. Perfluorocarbons are fluorinated compounds widely used in the past as oxygen carriers due to their inertness and high oxygen solubility [1]. Phase-shift droplets possess distinct advantages compared with conventional contrast microbubbles, such as improved stability and longer circulation half-life in vivo [2,3]. These liquid droplets maintain their liquid state at physiologic temperature due to a combination of increased Laplace pressure [4] and inhibition of heterogeneous nucleation [5]. An ultrasound pulse triggers activation and phase-transition into a gaseous state, a process termed acoustic droplet vaporization (ADV) [6]. Since ADV enables a highly localized and non-invasive generation of bubbles in situ in a spatiotemporally controlled manner, it has been extensively researched in ultrasound-based diagnostic [7][8][9] and therapeutic applications [10][11][12][13].

The underlying complex physics and the critical parameters affecting ADV have been the subject of several studies. A deep understanding of the dynamic response of phase-shift droplets to ultrasound is critical to ensure robust and repeatable outcomes for any ADV-assisted application. Both experimental and theoretical approaches have been undertaken to study the phase-transition and ADV-generated bubble dynamics under varying ultrasound conditions. Although great progress has been made on developing mathematical models to predict various stages of ADV, from phase-transition [14][15][16] to bubble growth [17][18][19], real-time experimental assessment of ADV dynamics remains limited due to the transient and microscopic nature of the phenomenon. High-speed, brightfield microscopy has enabled study of ADV at high temporal resolutions. Formation of the vapor nucleus in the initial phase-transition at the nanosecond timescale, interaction of the generated bubbles with ultrasound, and the growth stage microseconds after ultrasound have been recorded [20]. High-speed imaging, at a nanosecond timescale, demonstrated that the initial vapor nucleus was generated within the droplet, through homogeneous nucleation, and that it formed along the direction of ultrasound propagation [21][22][23]. Further studies showed that the initial nucleation site depended on the interplay between the size of the droplet and the wavelength of the incoming wave [22]. The connection between the initial nucleation site and the pressure and phase information of the incident ultrasound pulse indicated that nucleation took place during the peak rarefactional half cycle of the ultrasound, when the liquid experienced tension [23]. More recent high-speed studies have focused on the dynamics of the generated ADV bubbles, since their response to ultrasound is paramount to the biomedical applications. For example, the expansion ratio, defined as the maximum bubble radius with respect to the initial radius of the droplet, the expansion velocity, and the final bubble size correlated inversely with the perfluorocarbon bulk boiling point [17]. Expansions as high as ~8-fold were reported for droplets with bulk boiling points less than 29 °C.
Microseconds after ultrasound was turned off, the ADV-generated bubbles were shown to oscillate in a decaying manner, due to unforced (i.e., not driven by ultrasound) radial oscillation, and settled to a final resting bubble size [24,25]. A detailed review of high-speed microscopy studies of ADV dynamics is provided in a prior review article [20].

Prior research on ultra-fast dynamics of ADV has been conducted exclusively in Newtonian fluids. Yet, biological liquids and soft tissues exhibit non-Newtonian behavior [26], necessitating investigation of ADV dynamics and relevant parameters in a viscoelastic medium. Additionally, a full-scale characterization of ADV dynamics across a broad range of time scales, from nanoseconds to minutes, is required to fully capture the dynamics and mechanical interactions involved. Acoustic emissions during the microsecond ultrasound pulses, which are significantly affected by the radial dynamics of the ADV bubbles, play a critical role in diagnostic ultrasound. For applications such as tissue engineering, a comprehensive understanding of the longer-term behavior of the generated bubbles is also essential.

In this study, for the first time, we integrated ultra-high-speed microscopy and confocal microscopy to provide a full-scale characterization of ADV dynamics from nanosecond to second timescales. We investigated the ADV dynamics of three common phase-shift droplets − perfluoropentane (PFP), perfluorohexane (PFH), and perfluorooctane (PFO) − in fibrin-based hydrogels. Effects of fibrin concentration, excitation frequency, and bulk boiling point of phase-shift droplets on ADV dynamics were investigated. Fibrin was chosen as the hydrogel for its abundance as a provisional matrix during wound healing, FDA approval as a hemostatic sealant, and its broad range of biomedical applications [27,28]. The fibrin concentration range was chosen to mimic soft tissues with elastic moduli lower than 10 kPa. This approach is crucial in enhancing our understanding of how the surrounding microenvironment impacts the ADV response. Such elucidation is pivotal for advancing the development of innovative in situ microrheology methods. These methods are particularly useful in the context of pathological conditions such as fibrosis and cancer, where local mechanical properties of tissues change over time. PFP droplets require low acoustic energies for activation (i.e., low ADV threshold), and thus have been used in ultrasound imaging applications [9]. Droplets with comparatively higher bulk boiling points are predominantly used in therapeutic applications such as drug delivery and tissue regeneration [10,11]. Therefore, the findings here provide useful insight for both therapeutic and diagnostic applications of ultrasound. For example, in drug delivery applications, on-demand modulation of critical parameters such as dose and release rate remains a significant challenge. Designing payload-carrying phase-shift droplets and tailoring their response to ultrasound to achieve desired release kinetics necessitates a thorough understanding of the underlying physics. Several fundamental parameters, such as expansion ratio, collapse time, collapse radius, and radius at quasistatic equilibrium, were derived from our ultra-high-speed and confocal studies. This paper is the first to provide an in-depth understanding of ADV dynamics by analyzing fundamental parameters extracted from radius vs time curves. These key parameters are important for optimizing the use of phase-shift droplets and tuning the mechanical
effects in a variety of biomedical applications.

Microfluidic production of phase-shift droplets

Monodisperse microdroplets were generated via a microfluidic-based chip (Cat# 3200146, Dolomite, Royston, United Kingdom). The details of production of phase-shift microdroplets can be found in our prior publications [29,30]. PFP (CAS# 355-42-0, Strem Chemicals), PFH (CAS# 355-42-0, Strem Chemicals) or PFO (CAS# 307-34-6, Sigma-Aldrich) was used as the perfluorocarbon phase. For ease of visualization of droplets during confocal imaging, Cascade Blue dextran (Thermo Fisher Scientific, Waltham) was used. A solution of 50 mg/mL Pluronic F68 (CAS# 9003-11-6, Sigma-Aldrich) in phosphate buffered saline (PBS, Thermo Fisher) encapsulated the droplets. To produce phase-shift droplets with a nominal diameter of ~12 µm, the flow combination was set at 0.5 µL/min and 2.5 µL/min for the perfluorocarbon-containing phase and the Pluronic F68 solution, respectively. Due to its relevance to our prior publications as well as other studies within the tissue regeneration community, we used larger size phase-shift droplets. In this work, we refer to each phase-shift droplet by its respective perfluorocarbon (e.g., PFP droplets). Size characteristics of the droplets, measured using a Coulter Counter (aperture tube: 30 µm, Multisizer 4, Beckman Coulter), are summarized in Table 1.

Preparation of fibrin-based hydrogels

Details of preparing fibrin hydrogels containing phase-shift droplets are described in our prior publications [28,29]. Briefly, bovine fibrinogen solution (Sigma-Aldrich) was prepared at a concentration of 80 mg/mL in Dulbecco's modified Eagle's medium (DMEM, Thermo Fisher Scientific). The fibrinogen solution was then degassed for an hour. Fibrin-based hydrogels were prepared by combining the fibrinogen solution, DMEM, thrombin (Recothrom, Baxter, Deerfield, IL, USA), and phase-shift droplets. To polymerize fibrin hydrogels, thrombin with a concentration of 2 U/mL was added in the final step. We used a low volume fraction of droplets (0.001 % (v/v)) to minimize inter-droplet interactions (inter-droplet spacing: ~260 μm [34]). To study the effect of fibrin concentration on ADV dynamics of PFP droplets, hydrogels with fibrin concentrations ranging from 0.2 % to 8 % (w/v) were prepared. Samples were then subjected to oscillatory shear at 1 Hz with a strain of 1 %, based on the linear viscoelastic regime determined previously [30], to obtain shear elastic and loss moduli. The concentrations were chosen to mimic soft tissue properties [35,36]. For example, elastic moduli of 0.3-1.6 kPa, 0.6-1.7 kPa, and 8.7-27.8 kPa have been reported for brain, liver, and muscle tissues, respectively [37,38]. Hydrogels with a fibrin concentration of 1 % (w/v) were prepared to investigate the effect of excitation frequency and perfluorocarbon bulk boiling point. Fibrin was fluorescently labeled with 39 µg/mL Alexa Fluor 647-labeled fibrinogen (F35200, Molecular Probes, Eugene, OR, USA) for confocal imaging. Hydrogels (height: ~2 mm) were directly polymerized in a custom-made, 3D-printed (Form 3L, Formlabs, Boston, USA) cylinder with an opening in the bottom for a cover slip. DMEM was used as the overlying medium to couple with the ultrasound transducer.
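As a rough, back-of-the-envelope check on the quoted inter-droplet spacing at this dilute volume fraction, one common estimate is the mean nearest-neighbour distance for randomly dispersed spheres. The sketch below uses that estimate with the nominal droplet size; it is an illustrative assumption and not necessarily the calculation used in [34].

import math

def mean_nearest_neighbour_spacing(droplet_diameter_m, volume_fraction):
    # Chandrasekhar's mean nearest-neighbour distance, 0.554 * n**(-1/3),
    # for a random dispersion with number density n (droplets per m^3).
    radius = droplet_diameter_m / 2.0
    droplet_volume = (4.0 / 3.0) * math.pi * radius ** 3
    number_density = volume_fraction / droplet_volume
    return 0.554 * number_density ** (-1.0 / 3.0)

spacing = mean_nearest_neighbour_spacing(12e-6, 1e-5)   # ~12 um droplets at 0.001 % (v/v)
print(f"Estimated inter-droplet spacing: {spacing * 1e6:.0f} um")   # on the order of ~250 um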
Experimental setup and parameters

Focused single-element transducers (H-101 (f#: 0.98) and H-147 (f#: 0.83), Sonic Concepts Inc., Bothell, WA, USA) were used to induce ADV. Transducers were mounted on a 3-axis translation stage to confocally align the acoustic and optical foci. Calibration was performed in free field using an in-house built fiber optic hydrophone as described before [39,40]. Acoustic pressures are reported as peak rarefactional pressures (P r) in the manuscript. Each transducer was driven by a single, sine-wave ultrasound burst generated by a waveform generator (33220A, Agilent Technologies). Generated signals were amplified by a radiofrequency amplifier (240L, Electronic and Innovation Ltd). Experiments were conducted at three clinically relevant excitation frequencies: 1 MHz (P r: 4 MPa), 2.5 MHz (P r: 4 MPa), and 9.4 MHz (P r: 4 MPa). A higher pressure was used to generate ADV in PFO droplets at 2.5 MHz (P r: 4.5 MPa). The selected P r was suprathreshold for ADV for all phase-shift droplets studied herein. In fluids such as water, inertial cavitation (IC) occurs when the generated bubble grows to at least twice its original diameter, followed by a violent collapse driven by the inertia of the surrounding media. We previously showed that ADV occurred at lower rarefactional pressures than IC, indicating that phase transition precedes IC [40]. However, at lower frequencies and in phase-shift droplets with higher bulk boiling points, ADV and IC could happen at similar acoustic pressures [41]. The selected P r was above the IC threshold for phase-shift droplets at 1 MHz and 2.5 MHz [40,42]. In our prior studies, IC thresholds of 1.7 ± 0.2 MPa and 5.6 ± 0.4 MPa were measured in fibrin-only hydrogels at 1.1 MHz (pulse repetition frequency (PRF): 100 Hz) and 2.5 MHz (PRF: 100 Hz), respectively [40,41]. However, in the current study, no IC was observed in fibrin-only hydrogels due to the use of single-burst ultrasound. The number of cycles was kept constant (N = 6) throughout the experiments.

For ultrasound exposure, the mounted transducer was coupled with hydrogels containing phase-shift droplets through 3D-printed coupling cones filled with deionized water (Fig. 1a). An ultra-high-speed camera (HPV-X2, Shimadzu), providing a total of 256 frames at frame rates up to 10 million frames per second (Mfps), was paired with the confocal microscope (AX, Nikon) with a 20x objective (PLAN APO, N.A.: 0.8, Nikon) (Fig. 1a). A high-intensity pulsed laser (400 W, 640 nm, Cavitar Ltd) was used in a transmittance configuration for high-intensity, strobed illumination. The collimated illumination laser was aligned with the transducer, which had a 20-mm opening, using a combination of two 50-mm plano-convex lenses and two infinity-corrected tube lenses (Thorlabs, New Jersey, USA). Exposure time was set to 50 ns. High-speed video frames were analyzed to determine the bubble radius at every time point (Fig. 1b & c) using a custom image processing algorithm developed in MATLAB (The MathWorks) and TEMA Pro (Image Systems, Linkoping, Sweden), an advanced, commercially available motion tracker software. Based on the speed of sound in water at 37 °C and the focal length of the transducers, the camera recording was delayed appropriately to start a few nanoseconds before the arrival of ultrasound, and a total window of 25.6 µs was acquired. The synchronization of laser illumination, ultra-high-speed imaging, and ultrasound was accomplished using the same waveform generator.
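The camera delay mentioned above is essentially an acoustic time-of-flight calculation. A minimal sketch is shown below; the sound speed and the focal length used in the example are assumed values for illustration, not the calibrated numbers from this setup.

SPEED_OF_SOUND_WATER_37C = 1524.0   # m/s, approximate value for water at 37 degrees C (assumed)

def trigger_delay_seconds(focal_length_m, speed_of_sound=SPEED_OF_SOUND_WATER_37C):
    # Time for the ultrasound burst to travel from the transducer face to the focus;
    # the camera is triggered slightly before this delay elapses.
    return focal_length_m / speed_of_sound

print(f"{trigger_delay_seconds(0.063) * 1e6:.1f} us")   # ~41 us for an assumed 63 mm focal length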
Time-dependent, diffusion-driven growth of the ADV bubbles was acquired and analyzed using the manufacturer-provided software (NIS-Elements AR 5.4, Nikon). ADV events were initiated approximately 1 mm above the cover slip and at x-y positions at least 5R max away from any previously exposed location or edge of the hydrogel.

Quasistatic growth of ADV bubbles in fibrin-based hydrogels

Diffusion-driven ADV bubble growth recorded by confocal imaging (~O(s)) was further fit to a modified Epstein-Plesset theory, including the material's nonlinear elastic properties, to calculate the growth rate. In the original Epstein-Plesset formula [43], the gas pressure in a bubble at equilibrium is given by p g = p ∞ + 2γ/R. When accounting for the elastic behavior of the material, the gas pressure can be calculated by adding a material stress term S corresponding to the elastic behavior of the material. The form can be chosen for a suitable material model in general; for our choice of Neo-Hookean-like elasticity, S is given by the expression in [44]. Following the approaches of [45,46] and incorporating the material stress term, we determined the relation for the quasistatic bubble radius growth rate in fibrin hydrogels (Eq. (1)), where D is the mass diffusion coefficient (D = 2 × 10⁻⁹ m²/s for air in water), ρ g is the gas density in the bubble (1.2 kg/m³ for air at ambient pressure), p ∞ is the ambient pressure (101 kPa), G is the linear shear elastic modulus of fibrin hydrogels [41], c ∞ is the gas concentration (kg/m³) away from the bubble, c s is the gas concentration at the bubble surface, and γ is the surface tension (0.065 N/m) [47]. The constant c s was determined based on Henry's law for oxygen and nitrogen gases in water at 37 °C at the ambient pressure, and the dimensionless ratio (c ∞ − c s)/ρ g was treated as a fitting parameter [45]. The effective elastic component S* for the Neo-Hookean elastic parameter was expressed in terms of R sf, the equilibrium radius at which the surrounding medium is strain-free. The ratio R sf/R was varied as a fitting parameter to reach the best fit. Viscoelastic properties of fibrin-based hydrogels, under quasi-static conditions, were determined using a rheometer.

Statistics

Statistical analyses were performed in MATLAB using a one-way ANOVA test. The data are displayed as the mean ± standard deviation. We reported the number of independent replicates (n) in the caption for each figure. Significant differences between the different conditions studied here are represented in box plots in Supplementary figures.

Effect of fibrin concentration on ADV dynamics of PFP droplets

Higher fibrin concentration significantly increased both elastic and loss moduli in fibrin hydrogels, as summarized in Table 2. Spatiotemporally resolved ADV bubble dynamics, via ultra-high-speed imaging, are presented in Supplementary videos 1a-d for PFP droplets in fibrin hydrogels of varying concentrations under similar ultrasound parameters − a single burst of 2.4 μs at 2.5 MHz (P r: 4 MPa). At a frame rate of 10 Mfps, a time window of 20 µs including before, during, and after ultrasound was recorded. Based on the change in the refractive index of a liquid droplet into a gas bubble, ultra-high-speed imaging indicated that phase-change was induced within 0.8 ± 0.2 μs following the arrival of ultrasound. Fibrin concentration did not significantly impact the initiation of ADV. ADV dynamics of PFP droplets, including expansion, subsequent oscillations, collapse, and rebounds are shown in bubble radius (R) versus time (t) plots (Fig.
2a-d) at varying fibrin concentrations. Each R-t curve was further normalized by the initial radius of a PFP droplet (R 0), termed the expansion ratio (Λ = R/R 0), and by the characteristic Rayleigh collapse time (t RC) (t* = t/t RC), computed using the far-field pressure and ρ = 1060 kg/m³, the mass density of fibrin hydrogels [44,48] (Fig. 2e-h). From Fig. 2, key parameters of the evolution of the radial dynamics, as shown schematically in Fig. 1b, were extracted for each fibrin concentration and summarized in Table 3. Data in Table 3 were further analyzed for statistical significance for each condition and are plotted in Fig. S1.

The key parameters are as follows: maximum bubble radius (R max) at the peak time (t = t peak), maximum expansion ratio (Λ max) defined as the ratio of an expanded bubble during ADV to the corresponding initial droplet radius (R 0), the minimum bubble radius (R c) during collapse, collapse time (t c), which is the time from when the bubble reached its maximum size to its final collapse radius, rebound radius (R reb), resting bubble radius at the end of high-speed microscopy at t = 20 µs (R rest), and maximum expansion velocity (V max). R max correlated inversely with fibrin concentration, decreasing by ~45 % when fibrin concentration increased from 0.2 % to 8 % (w/v). The effect of fibrin concentration was further reflected in Λ max, showing significant differences (p < 0.001) among all concentrations except for 0.2 % and 1 % fibrin hydrogels (Fig. S1). This observation has important implications for biomedical applications, since larger Λ max values lead to larger strains and higher strain rates in the surrounding media. At t peak, the surrounding fibrin hydrogel is out of dynamic equilibrium, with surface tension and material stress greater than the internal bubble pressure, ultimately driving the bubble collapse. At bubble collapse, the stored potential energy of the expanded bubble is converted into kinetic energy of the surrounding medium [49]. Increasing fibrin concentration reduced the initial collapse time, due to increased medium elasticity. During the collapse phase, the bubble contents compressed until the pressure inside the bubble was sufficiently large to balance the contraction forces [50]. The collapse time, t c, was significantly shorter in higher fibrin concentrations. The collapse was completely arrested at 12.1 ± 0.8 µs, 11.0 ± 0.6 µs, 5.1 ± 0.6 µs, and 4.9 ± 0.8 µs in hydrogels of 0.2 %, 1 %, 4 %, and 8 % fibrin concentrations, respectively, following which the bubble began to rebound. R reb was significantly smaller than R max in all fibrin concentrations, most likely due to viscous dissipation and acoustic radiation during the initial collapse [51]. The amplitude of R reb, indicative of the effect of fibrin viscosity, inversely correlated with fibrin concentration. However, the ratio R reb/R max was 0.5 ± 0.05 and did not differ significantly among the four fibrin concentrations studied.

Table 2 Bulk shear elastic and loss moduli of fibrin hydrogels were characterized using a rheometer.

There was no statistically significant difference in V max among the different fibrin concentrations (Table 3, Fig.
S1). Post-ultrasound dynamics were also impacted by fibrin concentration. R rest of the resultant ADV bubbles from PFP droplets was significantly larger in 0.2 % (w/v) fibrin hydrogels compared to 4 % (w/v) (p-value < 0.0001) and 8 % (w/v) (p-value < 0.0001) fibrin hydrogels. The diffusion-driven growth of ADV bubbles, because of gas influx, was further captured by confocal imaging up to 60 s (Fig. 3).

As shown in Fig. 3, growth of the bubbles was hindered in higher fibrin concentrations. The time scale of passive diffusion (~O(s)) was six orders of magnitude slower than the ADV dynamics (~O(μs)). R eq was significantly different among all four fibrin concentrations (p-value < 0.001), being larger in lower fibrin concentrations (Fig. S1). The modified Epstein-Plesset equation (Eq. (1)), accounting for surface tension and fibrin elasticity, was used to estimate the diffusion-driven growth rate of the ADV bubbles. Differences in growth rates of ADV-induced bubbles in fibrin-based hydrogels are shown in Fig. 3d.

Effect of frequency of excitation on the ADV dynamics of PFP droplets

A single burst consisting of 6 cycles at 4 MPa peak rarefactional pressure generated ADV at three clinically relevant frequencies. The effect of excitation frequency (1 MHz, 2.5 MHz, and 9.4 MHz) on the ADV dynamics of PFP droplets embedded in 1 % (w/v) fibrin hydrogels is presented in Supplementary videos 5-7. The corresponding R-t plots are shown in Fig. 4. Initiation of ADV, based on the change in the refractive index from liquid to gas, occurred significantly earlier at 9.4 MHz compared to the other two frequencies (p-value = 0.002). Parameters extracted from the R-t (Fig. 4a-c) and the corresponding Λ-t* curves (Fig. 4d-f) were further analyzed for statistical significance (Fig. S2), indicating a significant impact of the excitation frequency on the radial dynamics of PFP droplets.

ADV-induced PFP bubbles had a significantly higher Λ max at 1 MHz compared to 2.5 MHz (p-value < 0.0001) and 9.4 MHz (p-value < 0.0001). As can be observed in Supplementary videos 5-7 and Fig. 4, ADV was more localized and confined within PFP phase-shift droplets at 9.4 MHz. Additionally, t c was significantly reduced, by up to two-fold, as the excitation frequency increased. Similar to the trends observed with fibrin concentration, higher excitation frequency significantly reduced the amplitude of R reb (Table 4, Fig. S2). Although V max was not significantly different at the two excitation frequencies of 1 MHz and 2.5 MHz, it decreased by up to 60 % at an excitation frequency of 9.4 MHz (Fig. S2).

Stable bubble formation was observed at all excitation frequencies; however, the resultant ADV bubbles had significantly smaller R rest and R eq at 9.4 MHz compared to 2.5 MHz and 1 MHz (Fig. 5). Differences in growth rates of ADV-induced bubbles at different frequencies are shown in Fig. 5d. Table 4 summarizes the derived parameters from the experimental high-speed and confocal microscopy studies at varying frequencies.

Effect of bulk boiling point of phase-shift droplets on the ADV dynamics

In addition to the surrounding media viscoelasticity and excitation frequency, the bulk boiling point of phase-shift droplets is another important determinant of phase-change and cavitation properties. Representative ultra-high-speed ADV dynamics for PFP, PFH and PFO droplets in 1 % (w/v) fibrin hydrogels are shown in Supplementary videos 6-8 and plotted in Fig.
6. ADV initiation was not significantly different among the three phase-shift droplets and occurred within a microsecond after the arrival of ultrasound.

The bulk boiling point of phase-shift droplets significantly affected parameters during and post ultrasound (Table 5, Fig. S3). For example, R c, R rest, and R eq were significantly smaller for PFH and PFO droplets compared to PFP droplets, most likely due to the significantly higher vapor pressure of PFP and, as a result, the higher internal pressure inside the generated PFP bubble.

Although the ratio R rest/R 0 was ~1.1 for both PFH and PFO droplets, indicating partial or complete recondensation, confocal microscopy consistently captured stable bubble formation and micropore formation for PFH and PFO droplets, respectively, on a longer timescale (~O(s)) (Fig. 7). Due to a significantly lower vapor pressure, no stable bubble formation was observed for PFO droplets after ultrasound was turned off, as evidenced by an insignificant change in the temporal evolution of radius in the recorded confocal images (Fig. 7c & d).

Discussion

We integrated ultra-high-speed imaging, confocal microscopy, and focused ultrasound for full-scale characterization of the ADV dynamics of three commonly used phase-shift droplets within a viscoelastic environment. Radial dynamics of ADV bubbles, key to their effectiveness for both imaging and therapeutic applications, are a function of intrinsic features (e.g., droplet size and bulk boiling point) and extrinsic factors (e.g., driving frequency and surrounding media). Given the potential applications of phase-shift droplets in biomedical applications, there is a need to extend our understanding to viscoelastic media. The effect of surrounding viscoelasticity has not been explored in the past, since prior experimental high-speed studies of ADV dynamics were conducted primarily in Newtonian fluids [21][22][23][52]. Fibrin was chosen as the hydrogel for its broad range of biomedical applications [27,28]. Due to the high temporal resolution of our ultra-high-speed camera (up to 10 Mfps), the different stages from liquid-to-gas expansion, to oscillation of the generated ADV bubbles, to their collapse and subsequent rebound were captured, as can be seen in the R-t plots presented in the manuscript.

Table 4 Summary of parameters derived from ultra-high-speed and confocal microscopy studies (Figs. 4 & 5) for perfluoropentane phase-shift droplets embedded in 1 % (w/v) fibrin-based hydrogels and exposed to ultrasound at three different excitation frequencies. Definition of parameters is given in Table 3 and shown schematically in Fig. 1b.

Our rheological measurements, under quasistatic conditions, showed that both elastic (storage) and loss moduli correlated with fibrin concentration (Table 2). Understanding how changes in the microstructure and stiffness of the surrounding media impact the radial dynamics of ADV bubbles is critical, since the mechanical properties of tissues vary and can change as a result of disease. For example, elastic moduli in healthy and cancerous pancreatic tissues were reported to be around 1 kPa and 6 kPa, respectively. In the current study, ADV bubbles achieved a larger maximum radius in lower fibrin concentrations. As fibrin concentration increased from 0.2 % (w/v) to 8 % (w/v), Λ max significantly decreased by ~50 % (Fig.
2 and Table 3). Λ max is an important quantity for developing safe and effective diagnostic and therapeutic applications, as our prior work found a correlation between the maximum strain at bubble walls and damage in cell cultures exposed to inertial cavitation [53]. Larger Λ max in ADV-induced bubbles similarly leads to larger strains in the surroundings. Ultra-fast tracking of particle displacements induced by ADV indicated that radial strains (engineering strains) were largest at the maximum bubble radius and correlated directly with Λ max [54]. A radial strain of ~2 was measured for a PFP bubble with a Λ max of ~3.5 in 1 % fibrin hydrogels. The ADV-induced strains were hyperlocal and decreased significantly with distance from the bubbles [54]. Radial strain fields (logarithmic Hencky strain) were shown to decay as ~1/r³, where r is the distance from the center of the laser-induced bubble [55].

In an earlier study of laser-induced bubbles in polyacrylamide hydrogels, using the quadratic-law Kelvin-Voigt model to represent soft materials with strain-stiffening behavior, elastic stresses were shown to be highest at R max, where strain was largest, while peaks in viscous stress occurred at the onset of bubble growth and at R c, where the strain rate was largest [56]. In the context of mechanical characterization of soft biomaterials at high strain rates, dynamics of laser-induced bubbles in gelatin [57] and agarose [58] hydrogels were studied via ultra-high-speed imaging. Similar to our observation, a 10-fold increase in agarose concentration reduced R max of laser-induced bubbles (R max: ~260 μm in 0.5 % agarose vs R max: ~184 μm in 5 % agarose). This emphasizes the sensitivity of bubble dynamics to the surrounding properties, as previously supported by several numerical studies [26,44,59]. Specifically, an increase in fibrin concentration increases the elastic modulus, which serves to decrease the maximum attainable radius of the bubble via stress built up in the surrounding material during expansion. Notably, the sensitivity to the surrounding viscoelasticity was shown numerically for micron-sized bubbles with radii smaller than 50 µm [60]. Changes in radial dynamics as a result of the surrounding media can significantly impact the scattering cross section of the generated bubbles and thus their acoustic emissions, both linear resonance properties and nonlinear behavior, making ADV bubbles suitable probes for local tissue characterization. As fibrin concentration increased, growth and collapse velocities of the generated ADV bubbles were significantly faster (Fig.
S1). Prior numerical investigations also showed that fluid elasticity accelerated the collapse of the bubbles [61,62]. According to Rayleigh collapse, a larger R max leads to a longer collapse time, as was observed for PFP bubbles in lower fibrin concentrations. ADV-generated bubbles settled to diameters significantly smaller than the maximum diameters reached during the expansion phase (i.e., R rest < R max). The ratio R max/R rest was 2.4, 2.2, 2.3, and 1.9 for ADV-generated bubbles at 2.5 MHz in fibrin hydrogels of 0.2 % (w/v), 1 % (w/v), 4 % (w/v), and 8 % (w/v) with PFP phase-shift droplets, respectively. Knowing R rest and R max can help in applications such as sonoporation, where the distance of cells with respect to phase-shift droplets (d) can define cellular bioeffects such as irreversible sonoporation (d < R rest), reversible sonoporation (R rest < d < R max), or no sonoporation (d > R max) [54,63,64]. Fibrin concentration impacted the temporal growth of the ADV-generated bubbles from PFP droplets on a longer time scale as well. Time-lapse confocal imaging recorded the passive diffusion-driven growth (i.e., in the absence of a changing pressure field) of the ADV-induced bubbles up to 60 s after ultrasound was turned off. The slow growth rate of ADV-generated bubbles, which correlated inversely with fibrin concentration, may be driven by the layer of highly dense fibrin surrounding the bubble that provides shell-induced resistance to gas transfer [65]. The high driving pressure used here (i.e., P r: 4 MPa) led to non-spherical oscillations of ADV bubbles, particularly in lower fibrin concentrations (Videos S1-4). However, the generated ADV bubbles appeared spherical in confocal images, taken on a longer time scale compared to the ADV time scale, due to surface tension and the stabilizing effects of the material's viscoelasticity at longer timescales. It should be noted that the ultrasound forcing period used here was on the order of 106 ns (at 9.4 MHz) to 1 µs (at 1 MHz), which was much shorter than the fibrin relaxation time (~O(ms)) [66]. This led to strain buildup in the matrix during ADV. Since fibrin, like other natural biomaterials, exhibits strain-stiffening behavior, ADV-generated bubbles can induce significant compaction and stiffening at the bubble-fibrin interface [66,67]. ADV-induced strain stiffening, evidenced by significantly brighter fluorescence intensities in confocal images (Figs. 3, 5, and 7) post-ultrasound, was observed for all three phase-shift droplets regardless of stable or transient bubble formation. This is important since ADV-induced stiffening in natural biomaterials can enable on-demand modulation (i.e., both spatially and temporally) of cellular phenotype and signaling [67,68].
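To illustrate the passive, diffusion-driven growth stage discussed above, the sketch below integrates a simplified Epstein-Plesset-type growth law. It is only an illustrative stand-in for the modified Eq. (1): the surface-tension and fibrin-elasticity corrections to the gas pressure are omitted, the dimensionless ratio (c ∞ − c s)/ρ g is treated as an assumed constant, and the parameter values are not fitted to the data reported here.

import math

def integrate_growth(r0, t_end, dt, ratio, d_coeff):
    # Forward-Euler integration of dR/dt = ratio * D * (1/R + 1/sqrt(pi*D*t)),
    # the classical Epstein-Plesset growth rate for a bubble fed by dissolved gas.
    r, t, history = r0, 0.0, [(0.0, r0)]
    while t < t_end:
        t += dt
        drdt = ratio * d_coeff * (1.0 / r + 1.0 / math.sqrt(math.pi * d_coeff * t))
        r = max(r + drdt * dt, 1e-9)
        history.append((t, r))
    return history

# Example: 6 um initial bubble radius, air-like diffusivity, assumed fit ratio, 60 s window.
curve = integrate_growth(r0=6e-6, t_end=60.0, dt=1e-3, ratio=0.02, d_coeff=2e-9)
print(f"Radius after 60 s: {curve[-1][1] * 1e6:.1f} um")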
Excitation frequency and pressure are two critical parameters associated with bubble dynamics. In this work, all three frequencies of ultrasound triggered ADV in PFP droplets. Frequency of excitation had the most pronounced effect on the ADV dynamics of PFP droplets at a constant fibrin concentration. Λ max at 1 MHz was 2 times higher than at 2.5 MHz, and 7 times higher than at 9.4 MHz (Table 4). This is expected, since bubble growth is inversely related to the excitation frequency: lower frequency ultrasound provides a longer window for growth of the nucleated bubble via rectified diffusion [69]. A longer rarefactional phase at low excitation frequency results in a larger maximum bubble diameter. In applications where maintaining an upper limit on bubble size is critical, careful consideration should be given to the selection of the frequency. Higher excitation frequencies also enable better spatial selectivity of ADV due to a smaller focal volume, as well as a lower probability of inertial cavitation-related damage to large molecular payloads or adjacent cells. In our studies, initiation of ADV was faster at 9.4 MHz, likely due to superharmonic focusing being more prominent in micron-size droplets (larger than 6 μm in diameter) at higher frequencies, where the wavelength of higher harmonics becomes comparable to the droplet size [22,23].

Ultra-fast dynamics of phase-shift droplets with higher bulk boiling points remain less explored. Despite the vapor pressure in PFP droplets being three times higher than in PFH droplets at the ambient condition (Table 1), there was no significant difference in Λ max. This is likely due to the vapor pressure (P v) becoming less significant as the driving acoustic pressure increases (i.e., P r ≫ P v). Using classical nucleation theory, we demonstrated that the dependence of the critical radius of nucleation on the saturated vapor pressure of perfluorocarbon liquids was greatly reduced at high peak rarefactional pressures [42]. However, post-ADV dynamics were distinct for each phase-shift droplet. Stable bubble formation in PFH droplets can be attributed to inward gas diffusion during ADV, enhanced by the higher amplitude and long pulse durations used here (i.e., rectified diffusion). Contrastingly, PFO droplets recondensed. The notably reduced vapor pressure, along with the lower solubility and mass diffusivity of oxygen, decreased the volume of non-condensed gas and, thus, the likelihood of survival of a bubble in a PFO droplet. Higher bulk boiling point phase-shift droplets such as PFO not only offer enhanced thermal stability (i.e., preventing spontaneous bubble formation), but also enable a unique potential for transient bubble formation (i.e., during ultrasound). PFO droplets consistently underwent fragmentation (Video S10) after ultrasound was turned off.

Future studies will explore the dynamics of phase-shift droplets when administered in populations, since biomedical applications rarely rely on activation of a single droplet. Furthermore, we will investigate how the various ADV bubble dynamics observed in this study impact the release kinetics of payload-carrying phase-shift droplets and the resulting bioeffects via ultra-high-speed fluorescence microscopy.
Conclusions

ADV provides a dynamic platform for on-demand generation of microbubbles for both diagnostic and therapeutic applications of ultrasound. We investigated the dynamics of ADV by combining ultra-fast and confocal microscopy techniques. The findings demonstrate that the dynamics of ADV can be tuned by intrinsic features, such as the bulk boiling point of phase-shift droplets, or extrinsic factors, such as surrounding media viscoelasticity and acoustic parameters. ADV bubbles achieved a larger maximum radius in lower fibrin concentrations. The significant impact of fibrin concentration on key parameters such as Λ max can open doors to a novel ADV-assisted tissue characterization technique. Three clinically relevant frequencies triggered ADV in PFP droplets, resulting in stable bubble formation, although the dynamics were significantly different among the three frequencies during and post ultrasound. Similar to higher fibrin concentrations, higher driving frequencies suppressed maximum bubble expansion radii. We showed that, under similar acoustic conditions, stable or transient bubble formation depended on the bulk boiling point of the phase-shift droplets, enabling unique ADV dynamics which can be tailored for specific applications. Overall, this work provides a full-scale characterization, from nanoseconds to seconds, of ADV-bubble dynamics under varying conditions, which could aid in optimizing current applications as well as exploiting future opportunities of ADV.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig 1. Ultra-high-speed and confocal microscopy techniques were integrated with focused ultrasound to study the dynamics of acoustic droplet vaporization (ADV) under varying conditions in soft biomaterials composed of fibrin. ADV dynamics of three commonly used phase-shift droplets were studied: perfluoropentane (PFP), perfluorohexane and perfluorooctane. (a) Pictures of the integrated experimental setup showing the confocal microscope, ultra-high-speed camera, and focused ultrasound transducer. Ultra-high-speed imaging was performed in a back-illumination configuration at frame rates up to 10 million frames per second. Confocal microscopy was performed at 1 frame per second. (b) Images were processed to plot the temporal evolution of bubble radius, capturing bubble expansion, oscillation, collapse, and subsequent rebounds as shown schematically. Parameters such as initial radius (R 0), maximum expansion radius (R max), minimum radius at the first collapse (R c), secondary rebound radius (R reb), radius at the end of high-speed imaging (i.e., at 20 μs) (R rest), and radius at the quasistatic equilibrium (i.e., at 60 s) (R eq) were derived and compared for varying conditions. (c) Selected ultra-high-speed and confocal image sequences of a PFP droplet embedded in a 0.2 % (w/v) fibrin hydrogel exposed to a single burst of 2.4 µs at 2.5 MHz (P r: 4 MPa). Presence of ultrasound in the field of view is denoted with an asterisk. Scale bar: 25 µm.

Table 3 Summary of parameters derived from ultra-high-speed and confocal microscopy studies (Figs. 2 & 3) for perfluoropentane phase-shift droplets embedded in fibrin-based hydrogels at varying fibrin concentrations. Statistical analyses of the parameters are shown in Fig. S1.

Fig 3.
Longitudinal, diffusion-driven growth of perfluoropentane (PFP) phase-shift droplets vaporized via acoustic droplet vaporization (ADV) was recorded via confocal microscopy. PFP droplets (shown in cyan) were embedded in varying fibrin concentrations (0.2-8 % w/v) (shown in red). Selected confocal images of growing PFP bubbles in (a) 0.2 % and (b) 8 % fibrin hydrogels are shown. (c) Longitudinal growth of ADV bubbles up to 60 s is plotted for each fibrin concentration. (d) The modified Epstein-Plesset equation (Eq. (1)) was used to calculate the growth rate of ADV bubbles in each fibrin concentration. Acoustic parameters were as follows: a single burst of 2.4 µs at 2.5 MHz (P r: 4 MPa). Scale bar: 25 µm.

Fig 4. Acoustic droplet vaporization (ADV) dynamics of perfluoropentane (PFP) phase-shift droplets at three clinically relevant frequencies were recorded via ultra-high-speed microscopy at 10 Mfps. PFP droplets were embedded in 1 % fibrin hydrogels. Bubble radius (R) versus time (t) curves (a-c) and the corresponding expansion ratio (Λ = R/R 0) versus normalized time (t*) (non-dimensionalized by the Rayleigh collapse time (t RC), t* = t/t RC) (d-f) are shown for three excitation frequencies of 1 MHz, 2.5 MHz, and 9.4 MHz. A single burst consisting of 6 cycles at 4 MPa peak rarefactional pressure was used to generate ADV (n = 11 for 1 MHz, n = 12 for 2.5 MHz, and n = 13 for 9.4 MHz).

Fig 5. Confocal microscopy was used to record the longitudinal growth of bubbles generated via acoustic droplet vaporization at different frequencies. PFP droplets (shown in cyan) were embedded in 1 % (w/v) fibrin hydrogels (shown in red) and exposed to a single burst of ultrasound (6 cycles) at (a) 1.1 MHz (P r: 4 MPa), and (b) 9.4 MHz (P r: 4 MPa). (c) Time-dependent growth of ADV bubbles up to 60 s is shown. (d) The modified Epstein-Plesset equation (Eq. (1)) was used to calculate the growth rate of ADV bubbles in 1 % fibrin at varying frequencies. Scale bar: 25 µm.

Fig 7. Stable bubble formation via acoustic droplet vaporization depended on the bulk boiling point of phase-shift droplets. While stable bubbles were consistently generated in perfluoropentane (PFP) and perfluorohexane (PFH) phase-shift droplets embedded in hydrogels with 1 % (w/v) fibrin concentration, micropores were generated when perfluorooctane (PFO) droplets were used. (a) Representative confocal images of a PFH phase-shift droplet (shown in cyan) embedded in a fibrin hydrogel (shown in red) display diffusion-driven growth of the generated bubble up to 60 s after ultrasound. (b) ADV generated a fluid-filled micropore with no significant change in size over time for a PFO droplet. (c) Longitudinal growth of PFP, PFH, and PFO droplets is plotted from 1 s up to 60 s after ultrasound. (d) The modified Epstein-Plesset equation (Eq. (1)) was used to calculate the growth rate of the generated ADV features in 1 % fibrin. Droplets were exposed to a single burst of ultrasound (number of cycles: 6) at 2.5 MHz (P r: 4 MPa) for PFP and PFH droplets. A higher pressure (P r: 4.5 MPa) was used for PFO droplets. Scale bar: 25 µm.
2024-01-11T16:13:19.558Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "1fd941996b11ef6033435d6d18299ed38045c1a6", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ultsonch.2024.106754", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7d9fbbc0b64ac40e47c2709af9b2687e8954e2ec", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
127339721
pes2o/s2orc
v3-fos-license
Ball convergence for a fourth order iterative method

Numerical iterative methods are shown to be convergent using hypotheses on higher order derivatives, but these derivatives do not appear in the body structure of these methods. Therefore, their usage is limited although the methods may converge. In this article, we demonstrate the convergence using hypotheses only on the function's derivative of order one. In this way, we extend the usage of these methods. In addition, we present computable radii of convergence of the considered scheme and error bounds in terms of Lipschitz parameters. Moreover, we provide a counter example where previous studies were not applicable but our results are. Finally, we check our results on some other examples and also provide computable radii of convergence.

Introduction

Several nonlinear problems can be written in the form

G(u) = 0, (1)

where G : D ⊂ E_1 → E_2 is an operator, differentiable in the sense of Fréchet on a convex subset D, and E_1 and E_2 are two Banach spaces. In 2015, Artidiello et al. [1] studied a two-step method (2), where u_0 is an initial point, [·, ·; G] : D × D → L(E_1, E_2), H : L(E_2, E_2) → L(E_1, E_2), a_1, a_2, b_1, b_2 ∈ S and S = R or C. In addition, L(E_1, E_2) stands for the space of bounded linear operators mapping E_1 into E_2. Artidiello et al. [1] presented a convergence analysis of iterative scheme (2) for the particular case E_1 = E_2 = R^m, using Taylor expansions. In this regard, they assumed hypotheses up to the fourth-order derivative of the considered function G, although only the first-order derivative is involved in scheme (2). They also mentioned the benefits of their iterative methods over existing ones. For the counter example considered here, we obtain for u* = 1 that the third derivative is unbounded on E_1.

We have a plethora of iteration functions for obtaining the zeros of nonlinear functions. In these studies, the authors mention that the starting point u_0 should be sufficiently near to the required solution u*, in which case the obtained sequence {u_n} is guaranteed to converge to u*. However, this closeness of the initial guess is not well defined; that is, it is not specified how close the starting guess u_0 must be to the required solution in order to guarantee convergence. These local results do not give any details regarding the radius of the convergence ball of the corresponding iterative scheme. We answer these questions for iterative scheme (2) in section 2. Moreover, a similar approach can also be used for other existing iterative methods.

We extend the usage of scheme (2) by adopting hypotheses only on the first-order derivative of the considered function G and contractions in a Banach space setting. We use Lipschitz parameters instead of Taylor expansions. Moreover, our approach does not require derivatives of high order to establish the convergence order of scheme (2). In this way, we extend the usage of iterative scheme (2). The study is arranged as follows. We study the local convergence in section 2. In addition, we give a radius of the convergence ball, uniqueness results and computable error bounds, which were not studied in the earlier work. In the concluding section 3, we also discuss some particular cases and numerical experimentation.

Local convergence

Here, we present the local convergence study of iterative method (2). Let L > 0, L_0 > 0, L_1 > 0, L_2 > 0, L_3 > 0, M ≥ 1, a_1 ∈ S, a_2 ∈ S, b_1 ∈ S − {0} and b_2 ∈ S − {0} be given parameters.
In addition, we assume that H : [0, +∞) → [0, +∞) is a nondecreasing and continuous function. We introduce functions p_1, q and h_q on the interval [0, 1/L_0), with h_q(w) = q(w) − 1, and the parameter r_1 is defined from these functions. Since h_q(0) = −1 < 0 and h_q(w) → +∞ as w → (1/L_0)⁻, the function h_q has zeros in this interval; let r_q be the smallest such zero. In addition, we define functions p_2 and h_2 on the interval [0, r_q). Again, h_2(0) = −1 < 0 and h_2(w) → +∞ as w → r_q⁻. We take r_2 to be the smallest zero of the function h_2 in the open interval (0, r_q). We define

r = min{r_1, r_2}. (3)

Then, the corresponding bounds hold for each value of w ∈ [0, r). Let B(γ, ρ) and B̄(γ, ρ) denote the open and closed balls, respectively, in the Banach space E_1 with center γ ∈ E_1 and radius ρ > 0. Now, we present the local convergence study of iterative scheme (2), using the preceding notation.

Numerical Examples

Here, we confirm the theoretical conclusions presented in the earlier section 2. First of all, we provide the numerical outcomes obtained by using the presented method on a scalar equation, which is displayed in test example 3.1. In addition, we assume [u, v; G] = (1/2) ∫_0^1 G′(v + η(u − v)) dη, H(w) = 1 + 2w, a_1 = 0 and a_2 = b_1 = b_2 = 1. Moreover, we also display the starting point, the radius of convergence and the minimum number of iterations needed to reach the desired accuracy for the corresponding solution of the considered problem. Further, we want to cross-verify the theoretical convergence order. In this regard, we determine the computational convergence order by adopting formula (29) or, in the case where u* is not available, the approximate computational order of convergence (ACOC) (30) [16], denoted ρ.

We adopt the following stopping criteria in our computer programs for solving nonlinear equations: |u^(i+1) − u^(i)| < ε and |G(u^(i+1))| < ε, where we consider the tolerance error ε = 10^(−550).

In the case of nonlinear systems, we assume two standard systems of nonlinear equations (in examples 3.2 and 3.3) for checking our theoretical results. We consider H(w) = 1 + 2w, a_1 = 0 and a_2 = b_1 = b_2 = 1. That is, we consider King's family [1,23,29]. We display the starting point, the radius of convergence and the minimum number of iterations needed to reach the required accuracy for the corresponding solution of the considered problem. In addition, we calculate the computational convergence order by adopting the multi-dimensional version of the above mentioned formulas, namely (29) or (30), to verify the theoretical convergence order. We performed all the calculations/computations in Mathematica 9 (programming package) with multiple precision arithmetic for nonlinear equations and systems. We adopt the following stopping criteria: (i) ||u^(i+1) − u^(i)|| < ε and (ii) ||G(u^(i+1))|| < ε, where we assume the tolerance error for nonlinear systems ε = 10^(−50).

Finally, we obtain ρ = 4.00000 with the initial guess (1.0009), and n = 4 is the minimum number of iterations needed to attain this.

The derivative of the above function in the sense of Fréchet is defined accordingly. Notice that we obtain L_0 = e − 1, L = 1.789572397, M = 2, L_1 = L_2 = (e − 1)/2 and L_3 = 1.789572397/2. Then, by the definition of r_1 and r_2, we obtain r_1 = 0.382692, r_2 = 0.126408, and as a consequence, by (3), we conclude that r = 0.126408. Further, we get ρ = 3.93786 by considering the initial guess (0.1, 0.09, 0.1)^T, and n = 4 is the minimum number of iterations needed to attain the required accuracy.
We choose the following function G on D, for u = (u_1, u_2)^T: G(u) = (u_1 cos u_2 + e^{u_1+u_2}, u_1 − 1 + u_2)^T. The Fréchet derivative of this function is given by
G′(u) =
[ cos u_2 + e^{u_1+u_2}    e^{u_1+u_2} − u_1 sin u_2 ]
[ 1                        1                         ].
Further, we obtain ρ = 4.00000 with the initial guess (3.4, −2.4)^T, and n = 4 is the least number of iterations needed to reach the required accuracy.
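Since the two-step scheme (2) is not written out above, the following sketch uses a plain Newton iteration as a stand-in, only to illustrate the stopping criteria (i)–(ii) on this two-dimensional example. The function G, its Fréchet derivative and the initial guess (3.4, −2.4)^T are taken from the text; the tolerance 1e-12 and the use of double precision are my simplifications of the multiple-precision setting described above.

```python
import numpy as np

def G(u):
    u1, u2 = u
    return np.array([u1 * np.cos(u2) + np.exp(u1 + u2), u1 - 1.0 + u2])

def dG(u):
    u1, u2 = u
    return np.array([[np.cos(u2) + np.exp(u1 + u2), np.exp(u1 + u2) - u1 * np.sin(u2)],
                     [1.0, 1.0]])

def solve(u0, eps=1e-12, max_iter=50):
    """Newton iteration with stopping rules ||u_(i+1)-u_(i)|| < eps and ||G(u_(i+1))|| < eps."""
    u = np.asarray(u0, dtype=float)
    for i in range(1, max_iter + 1):
        u_new = u - np.linalg.solve(dG(u), G(u))
        if np.linalg.norm(u_new - u) < eps and np.linalg.norm(G(u_new)) < eps:
            return u_new, i
        u = u_new
    return u, max_iter

root, iters = solve([3.4, -2.4])
print(root, iters, np.linalg.norm(G(root)))
```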
2019-04-23T13:21:43.996Z
2018-12-01T00:00:00.000
{ "year": 2018, "sha1": "e4f0b8722cfd5a7650fb637dcd4eb5c536780b12", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1139/1/012035", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f365158b03ef16e46724da41c4eee61c28fbbb62", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Mathematics" ] }
150795497
pes2o/s2orc
v3-fos-license
Case Study – Greece. Biological terrorism and the need for biological defence are relatively new concepts for Greece. Although defence against weaponized pathogens was part of CBRN training in the military, it was the 9/11 massacre, followed by the horror of the anthrax letters, that triggered a more active involvement of the Greek public health sector. At that historical moment a third bullet was added to the existing disease outbreak classification – natural, accidental and now deliberate. These incidents and the subsequent 2004 Olympic Games in Athens drove the Greek government to focus on biodefence and to revise existing civil emergency planning by including new emerging threats. Introduction. Naturally occurring or accidental outbreaks of a disease usually take place in both urban and rural environments. Big cities are usually the targets of bioterrorism because their high population density results in both physical and psychological casualties. If the disease does not start in one's own country, then early warning might be possible, leading to preventive measures all the way from the borders into the community. The H1N1 virus pandemic is an example of this globalization of medical information that is useful to both countries and citizens. Main Public Health Threats. One important parameter of the epidemiology of infectious diseases is the movement of the populace, whether for professional, recreational or immigration reasons. In the past, moving from one location to another, even within the same country's borders, took considerable time. In modern times, usually less than 24 h is needed to cross the world. We witnessed the contribution of faster travel to the spread of disease recently during the 2010 flu pandemic. Apart from the legal movement of a population, mass illegal immigration also poses a significant problem in certain parts of the world – e.g. in Greece – in relation to the spread of a disease or the re-emergence of old diseases like malaria or tuberculosis. The geographical location of Greece and its porous borders, due to its extensive coastline, make it an attractive destination for those seeking a better living environment or a way to enter other EU countries as a final destination. Greece receives a considerable number of tourists annually, exceeding its own population, and Greeks also travel globally for the reasons mentioned above. This constant movement of a populace makes epidemiologic surveillance and disease prevention extremely difficult. The reality of disease transmission as a result of immigration and travel is reflected below in the results from the Hellenic Centre for Disease Control and Prevention (CDCP) [2] and various relevant NGOs addressing the health status of immigrants and the transmission of old and new infectious diseases. The percentage of declared cases of specific diseases attributed to Greek citizens and immigrants is shown in Table 11.1. Bioterrorism as a Potential Threat. Timely information is crucial when it comes to a natural or accidental outbreak of a disease. This information might be beneficial to laboratory or institution workers or to the population that needs to be protected. Of course, in most cases, basic hygiene measures (personal or collective, at home or in a wider infrastructure such as schools) can prevent these diseases. Defence against a deliberate outbreak of a disease requires intelligence; this type of medical intelligence falls to the national intelligence services, both civilian and military.
Usually, international collaboration is mandatory when weaponized pathogens are the problem. Risk identification and assessment contribute to national defence as well; this is a continuous process dealing with both the deliberate and non-deliberate forms of disease outbreaks. Internal (sanitary institutions, police reports, etc.) and external (neighboring countries, World Health Organization, EU public health surveillance systems, etc.) hints can assist experts in performing a risk assessment leading to an alert of the public health system. Current geopolitical instability and turmoil in our own region, combined with the existing direct and indirect, overt and covert threats against Western societies, make bioterrorism attacks a potential risk. Production of biological weapons is both easy and cost effective. Of course, we must discriminate between the production of pathogens and their weaponization, which is not as easy and needs specialized equipment. Pathogen production does not require large factories, and existing facilities in commercial infrastructure (food industry, drug industry) can be used for this purpose. On a smaller scale, pathogens can be cultivated in small laboratories or mobile caravans similar to those used to produce illegal drugs. Identification of such illegal laboratories is very difficult. Viral pathogens are more difficult to produce than bacteria and also need some extra precautions and equipment. Large quantities of biological weapons can still be produced in a short period of time (days or weeks) in small laboratories. According to Kathleen C. Bailey, former Assistant Director, Office for Disarmament and Armaments Control, who visited many biotechnology and pharmacology companies, a complete biolab requires no more than a room of 4.5 m × 4.5 m and a budget of USD 15,000 for supplies [1]. In such a room, trillions of bacteria can be quickly produced with low risk and with minimum personal protection equipment such as a gas mask and a plastic suit over clothing. Difficulties relevant to the production of biological weapons include: • difficulties in the protection of workers at all levels of production, transportation, and final dispersal of biological weapons; • a low level of training and expertise, which can lead to accidents and exposure to pathogens; • vaccination of those involved is not always protective/effective; • controlling the quality and quantity of the produced material is difficult; • dispersion is not without problems, since dispersal device explosives, UV exposure, or weather conditions such as rain or drying may have negative effects on pathogens or spores; • storage of pathogens poses additional problems; specific conditions are required to maintain efficacy, and it is difficult to keep them in a form ready for dispersion over long periods of time. Preparedness and Response to Health Emergencies in Greece. Key stakeholders in public health preparedness and response systems are: Epidemiologic Monitoring in Greece. Epidemiologic monitoring is the systematic and continuous collection, analysis and interpretation of sanitary/medical information relevant to public health.
The objectives of epidemiologic monitoring include the follow-up of tendencies (estimating the impact of a disease or health problem). Analysis of the Different Types of Epidemiologic Monitoring Systems. The system of mandatory reporting of diseases represents the basis of epidemiologic monitoring in most countries; usually it is supplemented by more specialized systems, networks or studies with specific objectives. The objectives of this system are: specific (for the system of mandatory reporting of diseases) – detection of sporadic cases and detection of epidemic cases; generic (for every system of epidemiologic monitoring) – estimation of repercussions. Flow of Information. The reporting process can start from the clinical or laboratory doctor or the hospital's infectious diseases nurse, but the report has to be sent immediately (by fax) to the Regional Health Directorates and the CDCP. The reporting form includes the required case data. After reporting, an evaluation of the validity/completeness of the reported elements follows, along with a thorough investigation of the case that leads to systematic/rapid analysis and interpretation/export of the conclusions. Briefing of public health/sanitary/medical/nursing services then follows, together with a complete evaluation of the system. 1998: essential improvement of the mandatory reporting system (National Centre of Epidemiologic Surveillance and Intervention). 2003: "Regulations applied for regional systems of health and providence", Art. 44, Law 3204/23-12-2003: CDCP – each private or public medical institution or individual doctor, operating legally, is obliged to inform the CDCP of each case of infectious disease that comes to his/her attention. Hellenic Personal Data Protection Authority, 1997: "Protection of individuals from the processing of data of a personal character", Art. 7, Law 2472/1997: exceptionally, processing is allowed – if it concerns subjects of health; if it is executed by a health professional bound by a duty of secrecy; if it is essential for medical prevention. System of Illness Observers in the Primary Care Setting (Sentinel Physicians). This system was set in operation in 1999 and revised in September 2004. It deals with common diseases with (usually) minor indications. Its scope is to support the health system through data gathering and processing, to make a clear estimate of diachronic trends, and to detect a possible epidemic elevation in an area or region. A large number of selected primary care doctors participate in this system/programme. These doctors are distributed all over the country in the following networks: private doctors network (86 physicians); regional health care centres/clinics (98 physicians); social security institute health units network (44 physicians). The diseases included in the system of illness observers at the first-degree health care centres are: whooping cough, measles, mumps, rubella, varicella, influenza of infective etiology, and respiratory infection with fever (>37.5 °C). A weekly report of the number of cases and patients is produced, according to the clinical findings and case definitions. Military and Civilian Agencies' Contribution to Preparedness and Response Against Natural or Deliberate Health Emergencies in Greece. In the case of a suspected or confirmed biological incident – deliberate or not – that needs to be treated, all public sector services alert the Civil Protection Operations' Centre of the GSCP.
GSCP then activates the Crisis Management Team (CMT) which consists of representatives from Police, Fire Service, First Aid National Center (FANC) [7], National Defence General Staff, Centre for Disease Control and Prevention (CDCP) and the GSCP itself. GSCP's representative coordinates the functions of the CMT through telephone or video conference. After the thorough evaluation of the severity of the incident and the classification with different color codes if necessary, CMT will conduct a meeting at the GSCP building for better coordination of the operation. When an initial estimation has been made, medical directorates in various/all regions of the country are informed and guidelines are issued. Medical directorates are obliged to report immediately to the GSCP about any laboratory result following citizens' examinations and inform the public according to the guidelines of GSCP. Different missions are given to Police and Fire Service depending on the incident's nature. If needed, National Defence General Staff contributes resources through its military hospitals, laboratories, mobile laboratories, medical personnel, services (mass vaccination) and equipment (direct supply of masks with filters against biological agents, personal protective suits, decontaminants, antidotes, drugs, mobile toilets, and decontamination facilities) or other supportive units (e.g. to clear or secure an area, for quick transportation or relocation of people, etc.). In case of a CBRN agent release, Hellenic National Defence General Staff activates its Special Joint CBRN Company which has the capability to be airborne and deploy anywhere in Greece within 4 h (maximum), to conduct a CBRN search, survey, identification, sampling, decontamination, and provide specialized first aid. For bioterrorism agents, this company has the capability to operate portable biological detectors that can identify pathogens of special interest, such as those causing anthrax or plague, within 30 min (up to 28 biological samples can be processed simultaneously). The Platoon was established after the 2004 Olympic Games by merging the two specialized units (one field unit operating in both hot and warm zones and one hospital-based unit deployed at the Army General Hospital of Athens) that were created and deployed during the Games in support of first responders.
2019-05-13T13:06:47.540Z
2012-08-31T00:00:00.000
{ "year": 2012, "sha1": "d4ec2e2363bdbc9bae3a090b5f5874339f57df0a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "3849fe326f85f6002706fb413e7e630ea43b7d36", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Political Science" ] }
238720289
pes2o/s2orc
v3-fos-license
MFCosface: A Masked-Face Recognition Algorithm Based on Large Margin Cosine Loss : The world today is being hit by COVID-19. As opposed to fingerprints and ID cards, facial recognition technology can effectively prevent the spread of viruses in public places because it does not require contact with specific sensors. However, people also need to wear masks when entering public places, and masks will greatly affect the accuracy of facial recognition. Accurately performing facial recognition while people wear masks is a great challenge. In order to solve the problem of low facial recognition accuracy with mask wearers during the COVID-19 epidemic, we propose a masked-face recognition algorithm based on large margin cosine loss (MFCosface). Due to insufficient masked-face data for training, we designed a masked-face image generation algorithm based on the detection of the detection of key facial features. The face is detected and aligned through a multi-task cascaded convolutional network; and then we detect the key features of the face and select the mask template for coverage according to the positional information of the key features. Finally, we generate the corresponding masked-face image. Through analysis of the masked-face images, we found that triplet loss is not applicable to our datasets, because the results of online triplet selection contain fewer mask changes, making it difficult for the model to learn the relationship between mask occlusion and feature mapping. We use a large margin cosine loss as the loss function for training, which can map all the feature samples in a feature space with a smaller intra-class distance and a larger inter-class distance. In order to make the model pay more attention to the area that is not covered by the mask, we designed an Att-inception module that combines the Inception-Resnet module and the convolutional block attention module, which increases the weight of any unoccluded area in the feature map, thereby enlarging the unoccluded area’s contribution to the identification process. Experiments on several masked-face datasets have proved that our algorithm greatly improves the accuracy of masked-face recognition, and can accurately perform facial recognition with masked subjects. Introduction As a convenient and fast method of identification, facial recognition technology has been widely used in the fields of public security, financial business, justice, and criminal investigation. Facial recognition technology extracts facial features for classification and recognition, and it is has been one of the hotspots of research in recent years [1]. The world today is being hit by COVID-19, which is an infectious virus that causes severe acute respiratory syndrome [2]. According to the CDC's instructions, the best ways to avoid infection or spread of the disease are to maintain social distance and wear a mask in public. The current identification methods based on ID cards and fingerprints require contact with specific sensors, and facial recognition technology can avoid this unnecessary contact to a certain extent, avoiding the spread of COVID-19. However, wearing a mask affects the extraction of facial features [3,4], leading to low recognition accuracy, so the algorithm research on masked-face recognition has great practical significance at the moment [5]. 
Generally speaking, mask occlusion leads to the obstruction of the feature structure of the face, and the most important problem in facial recognition with masks is how to effectively represent the face when the feature structure is obstructed [6,7]. At present, most scholars use sparse representation and local feature extraction to solve the problem. Sparse representation is to use as few training samples as possible to re-represent the test object, that is, to seek the sparsest representation of the test sample [8]. Wen et al. [9] proposed structured occlusion coding, which separates the occlusion and classifies at the same time through an additional occlusion dictionary. Wu and Ding [10] used a hierarchical, sparse, low-rank regression model and proposed a SRC-based, gradient direction hierarchical adaptive sparse low-rank (GD-HASLR) model. Dong et al. [11] proposed a hybrid model that combines robust sparsity constraints and low-rank constraints. The model can simultaneously deal with random errors caused by random noise and structural errors caused by occlusion. However, due to the large-area occluded by a mask, the identity information is severely affected; a sparse representation is difficult to effectively reconstruct, making its recognition rate with masked-face images low. The method based on local feature extraction mainly focuses on the relationships between local features and facial features [12]. Wang et al. [13] used high-dimensional local binary patterns to obtain local features of human faces, and used densely connected convolutional neural networks to extract human faces. Song et al. [14] proposed a robust facial recognition method for occlusion based on the pairwise differential siamese network, which explicitly establishes the relationship between some occluded facial section and the occluded feature. Shi et al. [12] decomposed feature embeddings, set different confidence values for the decomposed sub-embeddings, and aggregated the sub-embeddings for identity recognition. However, the feature extraction method mainly relies on artificially designed feature representations, so it has a low recognition rate in an unconstrained environment. Many scholars have also made many other attempts at masked-face recognition [15][16][17][18]. Xie et al. [19] proposed the robust nuclear norm to characterize the structural error and a new robust matrix regression model composed of RMR and S-RMR. Ejaz et al. [20] used PCA for feature extraction and dimensionality reduction. They calculated the average facial features of each identity and performed identity recognition. Xu et al. [21] proposed a Siamese convolutional neural network for facial recognition based on the Siamese convolutional neural network and the Inception module. In the absence of masked-face datasets, most methods only simulate mask occlusion by adding random noise or black pixels, which makes their abilities with real mask occlusion questionable. For the above problems, this paper proposes a masked-face recognition algorithm based on large margin cosine loss (MFCosface). It uses an algorithm based on the detection of key facial features to generate masked-face images as a training set; then it uses the large margin cosine loss to train the model; and finally, it adds an attention mechanism to the model to optimize the representations of facial features, which effectively solves the problem of low recognition rates with mask occlusion. 
In summary, our contributions are as follows: • We designed a masked-face image generation algorithm based on the detection of key facial features, and generate masked-face images from face datasets and mask templates. The images were used construct a dataset for training, which alleviated the problem of insufficient data. • A masked-face recognition algorithm based on large margin cosine loss is proposed. We analyzed the characteristics of the masked-face dataset, and proved the rationality of using the large margin cosine loss function. • Experiments on our artificial masked-face dataset and a real masked-face image dataset proved that our algorithm greatly improves the accuracy of masked-face recognition, and can accurately perform facial recognition in spite of mask occlusion. Face Alignment and the Detection of Key Facial Features Face alignment is an important preprocessing method in a facial recognition algorithm. By constraining the geometric parameters of the face to reduce the differences arising from facial posture, one can effectively improve the robustness of a facial recognition network to facial posture changes. The existing face alignment algorithms can be mainly divided into two categories: generative methods and discriminative methods [22]. Generative methods regard face alignment as an optimization problem of fitting the appearance of a face [23], and generate an aligned facial image by optimizing the shape and appearance parameters [24]. Discriminative methods train multiple key facial feature detectors and infer face information from these feature points. As the size of the dataset increases, discriminative methods have shown obvious advantages in training and alignment speed, and have become the preferred methods of face alignment-e.g., take the commonly used algorithms MTCNN [25] and LAB [26]. Key facial features are also called facial landmarks [27]. They include the eyebrows, eyes, nose, mouth, facial contours, etc. The detection of key facial features is a key step in the field of facial recognition and analysis, and it is the key to other face-related issues, such as automatic facial recognition, expression analysis, three-dimensional face reconstruction, and three-dimensional animation. Softmax Loss Function Using a deep convolutional neural network (DCNN) for feature extraction for face representation is the preferred method of facial recognition [28,29]. DCNN performs a mapping operation on face images in a feature space with a small intra-class distance and a large inter-class distance. Some scholars trained a classifier based on the softmax loss function to separate different identities in the training set to solve the facial recognition problem. The softmax loss function is shown in Equation (1). Among them, x i ∈ R d represents the feature of the i-th sample, which belongs to the class y i . W j ∈ R d represents the j column of weight W ∈ R d×n ; b j ∈ R n is the deviation term; and the batch size and category are N and n, respectively. The method based on the softmax loss function can be trained on large-scale training data and deep convolutional neural networks to obtain excellent facial recognition performance, but this method also has many shortcomings: 1. The size of the linear transformation matrix W ∈ R d×n increases linearly with the number of identities n. 
Since a W ∈ R d×n matrix is used to output n identity prediction probabilities, the number of identities n in the large-scale test set is usually very large, so the matrix W will show a linear increasing trend with n, which will reduce the training efficiency of the model. 2. In the open-set facial recognition problem, a face cannot be fully distinguished. If a face image that has not been trained with before appears in the model recognition process, this method cannot distinguish it well, because the final output of the model does not include the probability of the identity never having appeared. 3. The softmax loss function does not explicitly optimize the characteristics of images to increase the similarity of the samples within the class and the diversity of the samples between the classes, which leads to a large number of appearance changes within each class. Hence, the performance for facial recognition is poor. In order to solve these problems, many scholars have improved the softmax loss function. Wen et al. [30] proposed center loss, which can reduce the intra-class variance by optimizing the feature distance between the class center and the sample. Liu et al. [31] proposed the large margin softmax (L-softmax) loss function, which adds an angle constraint to each sample by increasing the margin, which effectively expands the distance between classes and compresses the distance within classes. A-softmax [32] adds multiple restrictions to the angle in the L-softmax loss function, normalizes the weights, and provides a good geometric explanation by constraining the learning features to be distinguishable on the hyperspherical manifold. Wang et al. [33] proposed large margin cosine loss (CosFace) to apply feature normalization and use the global scale factor s to replace the sample-related feature norm in A-softmax. CosFace converts the angular distance to the cosine distance to achieve the smallest intra-class variance and the largest inter-class variance possible, which effectively improves the recognition accuracy. Deng et al. [34] proposed the Arcface facial recognition model, which improved AM-softmax and replaced the cosine distance with the angular distance. This method improves the loss based on softmax, and a large number of experiments have proved that it has superior recognition accuracy rates in facial recognition. Attention Model The attention model (AM) was originally used in machine translation, and has now become an important concept in the field of neural networks [35][36][37]. The attention mechanism can be explained intuitively by using the human visual mechanism: we usually pay more attention to the specific things that attract our attention [38][39][40]. In deep network learning, the attention mechanism is shown to give higher weights to elements understood from intuition-that is, it allocates more resources to important parts, and allocates less resources to unimportant or bad parts. This is conducive to obtaining higher revenue from fixed computing resources [41]. Dataset Preprocessing Since deep learning models usually require large-scale datasets for training, the lack of masked-face datasets makes it difficult for a model to learn the feature mapping when a face is occluded by a mask, resulting in a poor recognition rate. To solve this problem, we used a masked-face image generation algorithm based on the detection of key facial features to construct a dataset. 
We generated realistic masked-face images for model training through face detection, the detection of key features, and mask coverage analysis. The process of masked-face image generation is shown in Figure 1. 1. Face detection: We used a multi-task cascaded convolutional neural network (MTCNN) [25] for preprocessing, and got an image containing only faces. The result is shown in Figure 1. MTCNN is mainly composed of three cascaded convolutional neural networks. First, our system resizes the image and generates image pyramids of different scales; then we send them to P-Net to generate many candidate frames containing faces or partial faces; then we filter out a large number of poor candidate frames through R-Net and perform regression on the candidate frames to optimize the prediction results; finally, we use O-Net to regress the features and output the positions of the key features. 2. Key feature detection: MTCNN can only detect the key points of the eyes, nose, left mouth, and right mouth, and it is difficult to generate a more realistic masked-face image using only 5 key points, so we used HOG features to detect 68 key points of the face [42]. This method is more detailed in the detection of key points. Note that we used the Dlib library to implement this process. 3. Mask coverage: To generate a mask, we calculate the distance and structure information based on the relative positions of the chin and the bridge of the nose, get the coordinates of the mask, use common mask templates (surgical mask, KN95, etc.) to cover the face, and generate the masked-face image. The result is shown in Figure 1. Anwar et al. [43] also used a method for generating masked-face images for model training, but their method needs to collect mask templates from different angles, which has significant limitations. Our method uses only a front image of the mask, analyzes the relative positions of the key points of the face, and distorts the mask to generate a more realistic masked-face image. Loss Function. FaceNet is a facial recognition model proposed by the Google team [44] which mainly uses triplet loss for model training. A triplet is composed of three samples (x_a, x_p, x_n), where x_a and x_p are two face images with the same identity (positive pair), and x_a and x_n are two face images with different identities (negative pair). Assuming that the mappings of x_a, x_p, and x_n in the feature space are, respectively, f(x_a), f(x_p), and f(x_n), in order to make the feature distance of same-identity images smaller than the feature distances of different-identity images, the following inequality can be used: ‖f(x_a) − f(x_p)‖² + α < ‖f(x_a) − f(x_n)‖², where α is the margin between the positive pair and the negative pair. Through the above formula, the feature distance between the positive pair is forced to be much smaller than the feature distance between the negative pair – that is, the mappings of the same identity in the feature space are closer, and the mappings of different identities are farther apart. Hence, the triplet loss is as follows: L = Σ_i max(‖f(x_a^i) − f(x_p^i)‖² − ‖f(x_a^i) − f(x_n^i)‖² + α, 0). Since the network should select the most valuable triplets for training as far as possible, the selection of two images with high similarity for the positive pair will make it difficult for the model to learn an effective feature representation, while selecting the two most dissimilar images may lead to training collapse, so a semi-hard strategy is generally adopted.
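To make the preceding discussion concrete, the sketch below shows a generic FaceNet-style triplet loss together with one common definition of semi-hard negative selection (negatives farther than the positive but still inside the margin band). It is an illustration rather than code from either FaceNet or this paper; the margin value is an arbitrary placeholder, and the selection rule may differ in detail from the pairing described in the text.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """FaceNet-style triplet loss enforcing ||f(a)-f(p)||^2 + alpha < ||f(a)-f(n)||^2."""
    d_ap = (anchor - positive).pow(2).sum(dim=1)
    d_an = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_ap - d_an + alpha).mean()

def semi_hard_negatives(anchor, positive, candidates, alpha=0.2):
    """For each anchor, pick a negative that is farther than the positive but
    still inside the margin band: d_ap < d_an < d_ap + alpha."""
    d_ap = (anchor - positive).pow(2).sum(dim=1, keepdim=True)   # (B, 1)
    d_an = torch.cdist(anchor, candidates).pow(2)                # (B, M)
    band = (d_an > d_ap) & (d_an < d_ap + alpha)
    # candidates outside the band are masked out; if none remain for a row,
    # argmin falls back to index 0, which a real implementation should handle
    masked = torch.where(band, d_an, torch.full_like(d_an, float("inf")))
    return candidates[masked.argmin(dim=1)]

# usage sketch with L2-normalised embeddings of dimension 512
emb = lambda n: F.normalize(torch.randn(n, 512), dim=1)
a, p, cand = emb(8), emb(8), emb(64)
loss = triplet_loss(a, p, semi_hard_negatives(a, p, cand))
```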
The semi-hard strategy selects two images with poor similarity to form a positive pair, and two images with higher similarity to form a negative pair. Compared with the extreme selection of images with the highest or lowest degree of difference to form a triplet, this method is more balanced, and the model's iteration speed is faster. According to the characteristics of the masked-face dataset, ideally, triplets such as (Anchor, Positive1, Negative) and (Anchor, Positive3, Negative) in Figure 2 will be selected for training, because they contain more mask changes; this helps the model learn the relationships between mask changes and facial feature changes. Since the sample feature distances were L1 < L2 < L3 in the feature space, the model chose triplets such as (Anchor, Positive2, Negative) for training after adopting the semi-hard strategy, which contain fewer mask changes. Therefore, it is difficult for the model to extract the facial features occluded by a mask. In order to solve this problem, we used the large margin cosine loss function to train the model. The large margin cosine loss function replaces the selection of triplets for training with a non-grouping learning method, and its expression is as follows: L = (1/N) Σ_{i=1}^{N} −log[ e^{s·(cos(θ_{y_i}, i) − m)} / ( e^{s·(cos(θ_{y_i}, i) − m)} + Σ_{j ≠ y_i} e^{s·cos(θ_j, i)} ) ], subject to W_j = W*_j/‖W*_j‖, x_i = x*_i/‖x*_i‖ and cos(θ_j, i) = W_j^T x_i, where N is the number of training samples, x_i ∈ R^d represents the feature vector of the i-th sample, and its identity label is y_i. W_j ∈ R^d is the weight of class j, θ_{j}, i denotes the angle between W_j and x_i, s is the scaling factor, and m is the margin of the angle, used to limit the distance between classes. The large margin cosine loss function effectively solves the problem of insufficient mask changes in the model training process; all masked-face images are used for model training. Figure 3 shows its representation in the feature space; α and β, respectively, represent the angles between sample x and W1 and W2. For each sample x belonging to class X, cosβ − cosα ≥ m should be satisfied, and the margin m also increases the inter-class difference while further compressing the intra-class distance. Experiments have also proved that this method greatly improves the recognition accuracy. Attention Model. According to the characteristics of masked-face images, it can be known that most of the features are unavailable after the mask is put on [45]. If the model still focuses on the features of the global image, those effective features will be ignored. By adding a convolutional block attention module (CBAM) attention mechanism to the network, the model can focus on those truly effective image features, that is, the features of the areas that are not covered by the mask. The convolutional block attention module is an attention mechanism based on convolutional neural networks [38] which mainly includes a channel attention module and a spatial attention module. By calculating the feature information of these two modules, an attention mapping is generated; then the attention mapping and the feature mapping are multiplied element-wise to obtain the output features. For the output F ∈ R^{C×H×W} of any convolutional layer, CBAM generates a 1-dimensional channel attention mapping M_C ∈ R^{C×1×1} and a 2-dimensional spatial attention mapping M_S ∈ R^{1×H×W}, as in Equations (6) and (7): F′ = M_C(F) ⊗ F (6) and F″ = M_S(F′) ⊗ F′ (7), where ⊗ denotes element-wise multiplication and F″ is the final output feature mapping.
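As a concrete illustration of the large margin cosine loss written above, the following PyTorch sketch implements a CosFace-style classification head: features and class weights are L2-normalised, the margin m is subtracted from the target-class cosine, and cross-entropy is applied to the scaled logits. This is a minimal re-implementation of the published formulation rather than the authors' code; the embedding size of 512 and the 8335 identities in the usage lines match the configuration described in the paper, while s = 30 and m = 0.35 are illustrative defaults.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LargeMarginCosineLoss(nn.Module):
    """Minimal CosFace-style head: cosine logits with a margin on the target class."""
    def __init__(self, embedding_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.s, self.m = s, m
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, embeddings, labels):
        # cos(theta_j, i) = W_j^T x_i with both W_j and x_i L2-normalised
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # subtract the margin m from the target-class cosine only
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).to(cosine.dtype)
        logits = self.s * (cosine - self.m * one_hot)
        # cross-entropy over the scaled logits reproduces the loss written above
        return F.cross_entropy(logits, labels)

# usage sketch: 512-dimensional embeddings, 8335 training identities
head = LargeMarginCosineLoss(embedding_dim=512, num_classes=8335)
features = torch.randn(4, 512)          # backbone output for a mini-batch
labels = torch.tensor([0, 1, 2, 3])     # identity labels
loss = head(features, labels)
```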
The channel attention module performs average-pooling and max-pooling on the input feature map to obtain the average-pooling feature F^C_avg and the max-pooling feature F^C_max. These two features are fed into a shared multi-layer perceptron (Shared MLP) to aggregate the channel information and generate the channel attention mapping M_C ∈ R^{C×1×1}, as shown in Equation (8), where σ is the sigmoid activation function. Figure 4 shows the process of channel attention mapping. M_C(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F))) (8). The spatial attention module takes the channel-refined feature map (the element-wise product of M_C(F) and F) as input, and performs channel-based average-pooling and max-pooling on it to obtain the average-pooling feature F^S_avg and the max-pooling feature F^S_max. The obtained features are concatenated and convolved to obtain a feature mapping of dimension 1, and the final attention mapping is obtained after sigmoid activation, as shown in Equation (9), M_S(F) = σ(f^{7×7}([F^S_avg; F^S_max])) (9), where f^{7×7} represents a convolution operation with a filter size of 7 × 7, represented by the Conv layer in Figure 5. Figure 5 shows the process of spatial attention module mapping. Network Structure. We used Inception-ResNet-v1 as the basic network. The network structure is shown in Figure 6; for the specific structure one can refer to [46]. Inception-ResNet-v1 is mainly composed of a Reduction module and an Inception-ResNet module. The Reduction module uses a parallel structure to extract features while reducing the size of the feature map. The Inception-ResNet module replaces the pooling operation with a residual connection and does not change the size of the feature map. The model follows the idea of multi-scale methods, uses convolution kernels of different sizes to increase the receptive field, and fuses features from multiple scales. We used the Att-Inception module to replace the Inception-ResNet module in the network. The Att-Inception module integrates the CBAM attention mechanism into the Inception-ResNet module, which makes the model focus more on the effective features of the image. Figure 7 is the structure diagram of our model, in which modules of the same size output feature maps of the same dimension. Figure 6. After the face is covered with a mask, through the detection of the key facial features we proposed, the image is sent to a model composed of Att-Inception modules that incorporate the attention mechanism. Training the model with the large margin cosine loss L reduces the intra-class differences while increasing the inter-class differences. Datasets and Evaluation Criteria. We conducted experiments on five face datasets, VGGFace2_m, LFW_m, CASIA-FaceV5_m, MFR2, and RMFD. VGGFace2_m was used for model training, and the remaining datasets were used for testing. We divided the datasets into two types: generated masked-face datasets and real masked-face datasets. VGGFace2_m, LFW_m, and CASIA-FaceV5_m are generated masked-face datasets, including the original images and the masked-face images generated by our method; MFR2 and RMFD are real masked-face datasets. Examples of the datasets are shown in Figure 8. VGGFace2_m was generated from the VGG-Face2 face dataset. VGG-Face2 contains a large number of scenes, lighting settings, and ethnicities; it contains about 3.3 million pictures and 9131 identities. We used 8335 identities in the VGG-Face2 training set and randomly selected 40 pictures from each identity to form VGGFace2_mini.
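The two attention sub-modules described above can be sketched compactly in PyTorch. The module below follows the generic CBAM formulation (a shared MLP over average- and max-pooled channel descriptors, then a 7 × 7 convolution over channel-wise pooled maps); it is not the authors' implementation, and the reduction ratio of 16 is a conventional choice rather than a value stated in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied to a feature map."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # shared MLP of the channel attention module (Equation (8))
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        # 7x7 convolution of the spatial attention module (Equation (9))
        self.spatial = nn.Conv2d(2, 1, kernel_size=spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, f):
        # channel attention: sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F)))
        avg = self.mlp(F.adaptive_avg_pool2d(f, 1))
        mx = self.mlp(F.adaptive_max_pool2d(f, 1))
        f = f * torch.sigmoid(avg + mx)
        # spatial attention: sigmoid(conv7x7([AvgPool_c(F'); MaxPool_c(F')]))
        avg_c = f.mean(dim=1, keepdim=True)
        max_c = f.amax(dim=1, keepdim=True)
        return f * torch.sigmoid(self.spatial(torch.cat([avg_c, max_c], dim=1)))

# usage sketch: refine the output of one convolutional block
x = torch.randn(2, 256, 17, 17)
print(CBAM(256)(x).shape)   # torch.Size([2, 256, 17, 17])
```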
Note that we only used one-tenth of the original dataset. We used our method to generate the corresponding masked-face dataset, and mixed it with VGGFace2_mini to form VGGFace2_m. LFW_m was generated with the LFW dataset. LFW is currently the most commonly used dataset for facial recognition, having a total of 13,233 face images and 5749 identities. The face images are all photos from real life scenarios, which have high test difficulty. On the LFW dataset we used the same method to generate masked-face images and mixed them with the original images to generate the LFW_m dataset. CASIA-FaceV5_m was generated from the CASIA-FaceV5 dataset, referred to as CF_m. We use the same method to generate CF_m as the test set. CASIA-FaceV5 contains images of 500 people, and 5 for each person-2500 images in total. All pictures are of Asian faces. MFR2 is a small dataset containing 53 celebrity and politician identities. The dataset contains unmasked and masked images. There are a total of 269 images, and each identity has an average of about five pictures. This dataset contains more than just the common surgical mask and KN95 mask. It also contains masks with strange patterns. The RMFD dataset was collected and created by scholars of Wuhan University during the COVID-19 pandemic. It contains 525 objects, 90,000 unmasked-face images, and 2000 masked-face images. As it relies on the network to collect images, the dataset contains a large number of identity errors, duplicate images, and images in which it is too blurry to identify anyone's identity. We manually cleaned the dataset and used the 85,000 pictures obtained after cleaning as the test data. The experiment in this paper adopted the LFW dataset test method. Except for MFR2, which only used 400 pairs of images due to the small number of images, the other test sets randomly selected 6000 pairs of images as the test data, of which 3000 pairs had the same identities and the other 3000 pairs were from different people. Judging whether these images were of the same person or different people is the recognition result. We used the 10-fold cross-validation method to test the model, divided the test data into 10 randomly, selected 9 of the portions in turn as the training data, and used the rest as the test data. We repeated the training 10 times, and used the average of the 10 test results as the recognition accuracy. Experimental Configuration The experimental platform operating system was Ubuntu 18.04.2, the GPU was a single Tesla V100 with 32 GB memory, and we set the batch size to 90. Through experiments, it was found that the model basically converged at 150 iterations, so we set the number of iterations to 200, all experiments did not use the pre-training model. The input image size was 160 × 160 and the input data was standardized. The output feature vector dimensions was 512; the dropout parameter was set to 0.4. Both training data and test data used random flip to prevent overfitting. We used common surgical masks (blue, white, green), KN95 masksm and black masks, a total of five types of masks, as mask templates for the experiments, as shown in Figure 8. Experimental Results We tested the model on the generated datasets and the real datasets. For its training set, only our method used the VGGFace2_m dataset; and other methods used VGG-Face2_mini as their training set. The test results of the generated dataset are shown in Table 1. 
It can be seen that our method greatly improved on the generated datasets compared with the original method FaceNet, and the accuracy rate on the LFW_m dataset reached 99.33%. Since most of the data in the VGGFace2 dataset are European and American faces, the test accuracy on the Asian dataset CF_m was not as good as that in the LFW_m dataset, at only 97.03%. The test results of the real dataset are shown in Table 2. Our method increased the accuracy of the original network Facenet from 84.25% to 98.50%. Since our method has a great advantage in masked-face image recognition, and RMFD only contains 5% maskedface images, it did not achieve the best performance in this dataset. However, our method was only 0.13% worse than the most advanced method Arcface. It can be seen that our method was also greatly improved on the real masked-face dataset. Experimental results shows that our method has better recognition performance than other methods, and it can complete the facial recognition task in the mask occlusion state. Ablation Experiment In order to prove the effectiveness of MFCosface, ablation experiments were performed on the generated dataset LFW_m and the real dataset MFR2. According to the experimental results of LFW_m in Table 3, the corresponding ROC (receiver operating characteristic curve) was drawn, as shown in Figure 9. The ROC graph uses the false positive rate and the true positive rate as the coordinate axes, reflecting the relationship between them, and can better represent the model's recognition ability. AUC (area under the curve) is the area under the ROC curve. The higher the AUC value, the stronger the classification ability of the model. It can be seen from the experimental results that the accuracy was improved regardless of whether a single method or a combination of multiple methods was used. The highest accuracy was obtained when the three methods were used at the same time. In terms of ROC, the MFCosface method was also significantly better than the other methods, and the addition of each method improved the performance of the model. It is worth noting that in our Cosface + CBAM experiment, the combination of these two methods was not as good as Cosface. This is because the training dataset did not contain masked-face images. At this time, the attention mechanism made the model focus on the entire face, not areas that would not be occluded by the mask. This had an adverse effect; that is, when the face is occluded by the mask, the extracted facial features are severely obscured, resulting in a decrease in the recognition accuracy. The masked method solves the problem of insufficient data, the Cosface method optimizes the distribution of the feature space based on the data generated by the masked method, and CBAM makes the model more capable of learning useful features and is committed to solving hard samples. These three methods complement each other, making the model have stronger masked facial recognition capabilities. 9. The receiver operating characteristic curve. Facenet represents the original network, masked represents the use of images generated by our method for training, Cosface represents the use of large margin cosine loss as the loss function, CBAM means training with an attention mechanism, and MFCosface represents the method we proposed. Noise Experiment The key to solving the problem of masked-face recognition is to ignore the invalid features of the mask occlusion area and pay more attention to the effective features. 
In order to prove that our method pays more attention to the unoccluded facial area (the upper part of the face) during feature extraction, a noise experiment was designed. First, each entire image was divided into an upper part and a lower part according to the key points at the bridge of the nose; and then salt and pepper noise, Gaussian noise, and random noise were added for recognition. The results are shown in Table 4. The noise part "Up" represents the upper area of the face. "Down" represents the lower area of the face. "All" represents adding noise to the entire image. The addition of noise destroyed the facial feature information and caused a decrease in accuracy to a certain extent. In the experiment, MFCosface was compared with the original method FaceNet. It can be seen that on different datasets with different noises, our methods were much better than FaceNet and showed strong robustness. The recognition accuracy after adding noise to the lower half of the face was high, which shows that our method can still extract reliable facial features for recognition after the features in the lower half are destroyed. This proves that our method pays more attention to the facial features in the upper half of the area, and is less dependent on the facial features in the lower half. In summary, our method pays more attention to the upper half of the face that is not covered by the mask, and has strong robustness in the face of noise. Attention Mechanism Experiment The basic network Inception-ResNet-v1 is mainly composed of the Reduction module and the Inception-ResNet module. We set up the attention mechanism experiment based on different modules-that is, we added the CBAM attention mechanism to different modules. Table 5 shows the experimental results, where CBAM_Reduction represents CBAM being added to the Reduction module. The network structure is shown in Figure 10a (take the Reduction-A module as an example). V represents the padding mode as valid; Att-Inception means that CBAM was added to the Inception-ResNet module. The network structure is shown in the Figure 10b (take the Inception-ResNet-A module as an example). CBAM_All is where CBAM was added to all modules. According to the experimental results, the Att-Inception module constructed in this paper is significantly better than the other two modules. On the CF_m dataset, our recognition accuracy was not the highest, but it was only 0.14% worse than the highest-a relatively high performance. Experiments on several datasets proved that our method performs better in most cases and achieves higher recognition accuracy. Real-World Experiment In order to verify the accuracy of our method in masked-face recognition in real situations, we collected masked-face images from 137 students in the laboratory to form a dataset for testing. We took five unmasked images of each subject and five images of him/her while wearing a mask. The dataset contains changes in expression, light, angle, etc. We used five unmasked-face images and five corresponding images generated by our method as the training set, and the original dataset as the test set. The recognition accuracy was 98.54%. At the same time, we used the camera to collect data for recognition. The recognition result is shown in Figure 11. It can be seen from the recognition results that our method can recognize the identity of the person wearing a mask in a real situation, showing facial recognition ability. Figure 11. Real-world results. 
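The half-face noise test described above can be reproduced schematically as follows. The sketch assumes an aligned face image and a nose-bridge row obtained from the 68-point landmarks; the noise strengths are arbitrary illustrative values, and this is not the authors' evaluation code.

```python
import numpy as np

def add_noise_to_half(img, bridge_row, part="down", mode="gaussian", rng=None):
    """Corrupt only the upper or lower half of an aligned face image.

    img: HxWx3 uint8 array; bridge_row: row index of the nose bridge
    (e.g., around landmarks 27-30 of the 68-point model); part: "up", "down", "all".
    """
    rng = np.random.default_rng(rng)
    out = img.astype(np.float32).copy()
    rows = {"up": slice(0, bridge_row),
            "down": slice(bridge_row, img.shape[0]),
            "all": slice(None)}[part]
    region = out[rows]
    if mode == "gaussian":
        region += rng.normal(0.0, 20.0, size=region.shape)      # additive Gaussian noise
    elif mode == "salt_pepper":
        mask = rng.random(region.shape[:2])
        region[mask < 0.02] = 0.0                                # pepper
        region[mask > 0.98] = 255.0                              # salt
    else:                                                        # uniform random noise
        region += rng.uniform(-30.0, 30.0, size=region.shape)
    out[rows] = region
    return np.clip(out, 0, 255).astype(np.uint8)
```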
Conclusions In this paper, we proposed a masked-face recognition algorithm based on large margin cosine loss, which has high recognition accuracy. To address the problem of insufficient masked-face images, we used the detection of key facial features to cover face images with common mask templates to generate corresponding datasets. Through the analysis of the masked-face dataset, we found that triplet loss is not applicable to our dataset, and we used large margin cosine loss to train the model. Since the mask destroys some of the facial feature information, we added an attention mechanism to make the model focus on effective regions to extract more important feature information. Through experiments on generated masked-face datasets and real masked-face image datasets, it was proven that our method is superior to the other existing methods. Finally, a real-world experiment was undertaken that simulated a real situation, and the results show that our method performs masked-face recognition with high accuracy. In the future, we will explore how to combine semantic information to generate more realistic masked-face images to solve the problem of insufficient data. Like most algorithms, our method suffers from performance degradation when encountering extreme posture and expression changes, and we will focus on solving this problem in future research. Our method can also be extended to relatively regular occlusion objects, such as sunglasses and scarves.
2021-08-25T13:13:50.581Z
2021-08-09T00:00:00.000
{ "year": 2021, "sha1": "404244c55e51aae5639013626e4dae840d6a66f5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/11/16/7310/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a728c87d104c88538468771ca97057b7814ad173", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
210827796
pes2o/s2orc
v3-fos-license
Expression of Serum Exosomal miRNA 122 and Lipoprotein Levels in Dogs Naturally Infected by Leishmania infantum: A Preliminary Study Simple Summary The immunopathogenesis of leishmaniasis is not completely understood. Exosomes are extracellular vesicles produced by most eukaryotic cells, containing various molecular constituents with biological effects (e.g., proteins, peptides, RNA). They play an important role in cell-to-cell signaling. Recently, exosomal microRNA were demonstrated to be able to regulate gene expression and protein production in mammalian cells, serving as potential biomarkers of disease. The microRNA miR-122 is a biomarker of hepatic damage widely studied in mice in the course of Leishmania infection. Leishmania organisms can interfere with miR-122 production leading to a dysfunction in cholesterol metabolism ensuring its proliferation in the infected host. In this study, we suggest that such a phenomenon may also occur in dogs affected by Leishmania infection. Abstract Current knowledge on the role of exosomal microRNA (miRNA) in canine leishmaniasis (CL), with particular regards to the interaction between miR-122 and lipid alterations, is limited. The aim of this study was to isolate/characterize exosomes in canine serum and evaluate the expression of miR-122 in ten healthy and ten leishmaniotic dogs. Serum exosomes were isolated using a polymer-based kit, ExoQuick® and characterized by flow cytometry and transmission electron microscopy, whereas miR-122-5p expression was evaluated by quantitative reverse-transcriptase polymerase chain reaction. A significant decreased expression of exosomal miR-122-5p, decreased serum levels of high-density lipoproteins, and increased serum levels of low-density lipoproteins were seen in leishmaniotic dogs when compared with healthy dogs. These results suggest that hepatic dysfunctions induced by the parasite interfere with lipoprotein status. The decreased expression of exosomal miR122 represents an additional effect of Leishmania infection in dogs as in people. Introduction Leishmaniasis is a zoonosis caused by intracellular protozoa of the genus Leishmania transmitted by phlebotomines. During the initial phase of the infection, Leishmania spp. can survive within the Kupffer cells without affecting the hepatic parenchyma [1]. A high tolerability of such cells to Leishmania spp. promotes a parasite survival in the canine liver leading to a perturbation of liver function and, in particular, cholesterol and lipoprotein metabolism [2,3]. In fact, Leishmania parasites are able to modulate the expression of genes associated with cholesterol biosynthesis, uptake, and efflux [2,4]. Cholesterol plays an important role in Leishmania infection since amastigotes are not able to synthesize it de novo [5], however, the mechanistic links between Leishmania infection and lipid changes are complex, multifactorial, and not completely understood. Important differences between promastigotes and amastigotes of Leshmania chagasi have been observed regarding uptake through lipid rafts, subdomains of the plasma membrane that contain high concentrations of cholesterol and glycosphingolipids. A transient disruption of lipid rafts in cell membranes affected promastigote uptake, but not amastigote uptake by macrophages. These findings indicate a difference in the needs of Leishmania parasites regarding both the availability and origin of cholesterol. 
Leishmania protozoa can alter the metabolism of cholesterol directly or through the effect on lipoproteins; trypanosomatids are able to acquire cholesterol from low-density lipoproteins (LDLs) and high-density lipoproteins (HDLs) by endocytosis [6][7][8]. As in people, Ghosh et al. [9] showed that an inverse association between blood levels of cholesterol and susceptibility to Leishmania donovani infection was present in mice. Contrarily, in leishmaniotic dogs, while hyper/normal cholesterolemia has been detected, high levels of low-density lipoproteins (LDLs) and low levels of high-density lipoproteins (HDLs) have been reported [10][11][12]. Recently, microRNAs (miRNAs) have been used to investigate both lipid metabolism and function in animals [13]. miRNAs are small, 20-22 nucleotides long, posttranscriptional regulators identified in tissues and blood in healthy and diseased people and dogs [14,15]. They act on mRNA primarily as inhibitors (translational repression or degradation) affecting several physiological processes [13]. While in circulation, serum miRNAs are highly degradable, however, when transported in microvesicles (exosomes) these molecules are more stable and can serve as reliable diagnostic biomarkers in diseased patients [16][17][18]. Exosomes being small extracellular mycelial vesicles [19] protect RNA from RNAse degradation [20]. In 2013, Ghosh et al. [21] explored, for the first time, the role played by exosomes in miR-122 expression, the most common miRNA present in the liver tissue, in L. donovani infection in mice. The authors showed that, the glycoprotein gp63, present in Leishmania exosomes, was able to degrade Dicer1 in the hosts' hepatic cells, reducing the synthesis of miR-122. Considering these premises, the aim of this study was twofold: evaluate the expression of serum exosomal miR-122 and the lipoprotein profile in dogs naturally infected by Leishmania infantum. Animals Ten mixed breed dogs, naturally infected by L. infantum, and ten mixed breed healthy dogs were recruited in the present study. The diagnosis of CL was based on compatible clinical signs and confirmed by visualization of amastigotes in lymph nodal aspirates and serologically by a positive indirect fluorescent antibody test (IFAT) greater than 1:160 [22,23]. All dogs were also tested for presence of Dirofilaria immitis, Anaplasma phagocytophylum, Borrelia burgdorferi, and Ehrlichia canis antibodies using SNAP ® test (Canine SNAP 4Dx, IDEXX laboratories). In order to be enrolled, the dogs with leishmaniasis had to be untreated at the moment of diagnosis and negative to the SNAP test. The healthy dogs had to be clinically healthy, negative to IFAT (<1:40) [22,23] and the SNAP test. Samples Collection and Hemato-Biochemical Analysis Ten mL of peripheral blood were collected from the jugular vein of each dog and put into tubes without anticoagulant (5 mL) and in tubes containing ethylene diamine tetraacetic acid (EDTA) (5 mL). A complete blood cell count was performed within 30 min from the collection using a semi-automatic cell counter (Genius S; SEAC Radom Group, Florence, Italy). Serum was also collected after centrifugation at 327× g for 15 min and it was stored at −20 • C. Serum urea, creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), bilirubin, alkaline phosphatase (ALP), and total protein (TP) were analyzed using commercially available kits (Reactivos Spinreact S.A. OLOT, Gerona, Spain). 
Total serum cholesterol, triglycerides, and high-density lipoprotein cholesterol (HDL) were measured using a Dimension EXL analyzer (Siemens Healthcare Diagnostics s.r.l., Milan, Italy); low-density lipoprotein cholesterol (LDL) was calculated using the Friedewald equation [24]. Exosomes Isolation and Mirna Detection Exosomes were extracted from the serum using a polymer-based kit, ExoQuick ® (System Biosciences Mountain View, Palo Alto, CA, USA) according to a previous study [17]. Exosomes were analyzed by flow cytometry (FC) and characterized by transmission electron microscopy (TEM). Dynamic light scattering and zeta potential determinations were also performed with a Nano ZS 90 (Appendix A). Isolated exosomes were processed for miRNA isolation using a commercially available kit (exoRNeasy Serum Plasma Kit; Qiagen, Hilden, Germany). Subsequently, the cDNA was amplified by quantitative reverse-transcriptase polymerase chain reaction (qRT-PCR) following the manufacturer's instructions (Appendix A). Statistical Analysis The data were tested for normal distribution using the Kolmogorov-Smirnov test (alpha = 0.05). The unpaired two samples Student's t-test or Mann-Whitney test was performed to evaluate the behavior of each data variable between the two groups (healthy vs. CL). All statistical comparisons were performed using the GraphPad Prism6 Software (GraphPad Software Inc., La Jolla, CA, USA). A p < 0.05 was considered statistically significant. Clinical Examination and Blood Tests The median age at the moment of enrollment was four years (range: 1-6) for the healthy group and four years (range: 1-8) for the CL group. The mean body weight was 22.3 ± 5.4 kg and 20.4 ± 4.3 kg for healthy and CL group, respectively. There were four males and six females (three spayed) in the healthy group, whereas five males (one castrated) and five females (two spayed) were present in the CL group. There were no differences in age (Mann-Whitney; p = 0.37), weight (t-test; p = 0.39), or sex (Fisher's exact; p = 1) between the two groups. The more frequent clinical signs observed in the CL group were lymphadenopathy (80%), weight loss (70%), skin lesions (70%), and splenomegaly (30%). The skin lesions included seborrhea sicca (5) and alopecia (2). The results of hematological and biochemical tests are presented in Table 1. In particular, 70% of affected dogs had a non-regenerative normocytic normochromic anemia. In addition, CL dogs had a significant reduction in total red blood cells (p = 0.01), hematocrit (p = 0.0009), hemoglobin (p = 0.0001), mean corpuscular volume (MCV, p = 0.008), mean corpuscular hemoglobin (MCH, p < 0.0001), and mean corpuscular hemoglobin concentration (MCHC, p < 0.0001). The biochemical parameters were also altered in the CL group compared to the healthy dogs. In particular, levels of TPs (p = 0.0003) and LDLs (p = 0.01) were significantly increased, whereas the level of HDLs was significantly decreased (p < 0.0001). Exosomes Isolation and miRNA Detection Serum exosomes were detected as round vesicles of heterogeneous sizes via negative stain observed by TEM (Figure 1). The size determination was further investigated using a Zetasizer Nano resolved in an average size of 131 ± 4 nm with a Z potential of −27 ± 0.5 mV. Fluorescein isothiocyanate (FITC) positive singlets were 99.5%, 100%, and 94.4% for CD63, CD9, and CD81, respectively ( Figure 2). A total of 12 ng/μL of miRNA was isolated from serum exosomes. 
Using qRT-PCR, both miR-122 and RNU6-2, with Ct values of 35.3 ± 0.4 and 32.5 ± 0.6 respectively, were detected. When exosomal miR-122 levels were compared between healthy and leishmaniotic dogs, a significantly lower (p = 0.004) expression was seen in the latter group (Figure 3).
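To make the group comparisons reported above concrete, the sketch below illustrates a normality-gated two-group test (Kolmogorov-Smirnov, then Student's t-test or Mann-Whitney, as outlined in the Statistical Analysis section), together with the Friedewald relation used to derive LDL and a 2^-ΔCt transform for relative miRNA abundance. The variable names, the example values, and the 2^-ΔCt normalization against RNU6-2 are illustrative assumptions, not the study's raw data or its exact pipeline.

```python
# Illustrative sketch only: placeholder values, not the study's measurements.
import numpy as np
from scipy import stats

def friedewald_ldl(total_chol, hdl, triglycerides):
    # Friedewald estimate (concentrations in mg/dL): LDL = TC - HDL - TG/5
    return total_chol - hdl - triglycerides / 5.0

def relative_mirna(ct_target, ct_reference):
    # Assumed 2^-dCt transform with RNU6-2 as endogenous reference
    return 2.0 ** -(np.asarray(ct_target, float) - np.asarray(ct_reference, float))

def compare_groups(healthy, diseased, alpha=0.05):
    # Kolmogorov-Smirnov normality check on standardized data, then
    # Student's t-test (both groups normal) or Mann-Whitney U (otherwise)
    h, d = np.asarray(healthy, float), np.asarray(diseased, float)
    normal = lambda x: stats.kstest((x - x.mean()) / x.std(ddof=1), "norm").pvalue > alpha
    if normal(h) and normal(d):
        return "t-test", stats.ttest_ind(h, d).pvalue
    return "Mann-Whitney", stats.mannwhitneyu(h, d, alternative="two-sided").pvalue

# Example with made-up HDL values (mg/dL) for 10 healthy vs. 10 CL dogs:
hdl_healthy = [210, 198, 225, 190, 205, 220, 199, 215, 208, 212]
hdl_cl = [120, 135, 110, 140, 128, 132, 118, 125, 130, 122]
print(compare_groups(hdl_healthy, hdl_cl))
```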
Discussion Although a significant modification of the level of serum cholesterol was not present in this study, significant alterations of serum LDL and HDL levels were seen in the CL group, in agreement with previous studies [10][11][12][21]. Such alterations may suggest a lipid perturbation associated with Leishmania infection. These data are also in agreement with Carvalho et al. [28], who showed that people with clinical manifestations of visceral leishmaniasis have high triacylglycerol and very-low-density lipoprotein (VLDL) levels, but low HDL levels. Different mechanisms may be implicated in the reduction of HDL levels during Leishmania infection; these may include decreased hepatic synthesis and secretion of apolipoproteins [29], increased endothelial lipase activity [30], and displacement of apoA-I by serum amyloid A [31]. In addition to their primary role in lipid transport, HDLs have also been associated with anti-inflammatory and anti-oxidant activity, vascular endothelial cell activation, nitric oxide (NO) production, expression of inflammatory mediators, and endothelial progenitor cell proliferation [32][33][34][35]. A reduction of HDL levels could represent a defense mechanism that the protozoan uses to counteract the leishmanicidal activity of NO in infected macrophages [36]. In a recent study, Rodrigues Santos et al. [37] showed that human monocytes experimentally infected by L. infantum had two times higher parasitism in the presence of VLDL and HDL than when these lipoproteins were absent. This is the first study in leishmaniotic dogs showing lower levels of serum exosomal miR-122, a microRNA recently indicated as a good candidate marker for liver diseases in the absence of liver-specific biochemical markers [15]. In leishmaniotic dogs, liver damage can be present with or without specific clinical signs, and with either a low or a high parasitic burden. Indeed, liver granulomas (effector T cells, macrophages/dendritic cells) have been described in asymptomatic dogs with low parasite burdens, whereas non-organized granulomas were detected in the liver of symptomatic dogs with high parasite burdens [38]. In this study, liver biopsies were not performed because cytopathic markers of liver toxicity (e.g., ALT) were not increased; however, the lower levels of albumin and exosomal miR-122 in the absence of renal and enteric signs suggest liver dysfunction rather than liver damage. This dysfunction, associated with a reduction of circulating miR-122 in leishmaniotic dogs, leads to the hypothesis that, as in mice [21], Leishmania parasites may play a role in the regulation of specific miRNAs through gp63 in dogs. Future research should consider enrolling a significantly higher number of dogs naturally infected by L. infantum in order to relate exosomal miR-122, obtained both from serum and from liver biopsies, to the levels of circulating gp63.
Conclusions In summary, the results of the present study suggest that alterations of lipid metabolism (low HDL and high LDL serum levels), along with lower miR-122 expression, may indirectly mirror hepatic alterations induced by L. infantum in dogs. However, because of the low number of animals enrolled, further studies are warranted to better define the role of miR-122 as a potential biomarker of hepatic damage/dysfunction during canine leishmaniasis. Conflicts of Interest: The authors declare no conflicts of interest. Appendix A. Appendix A.1. Ethical and Regulatory Approval This study was approved by the Organism Proposed to Animal Welfare control (OPBA) at the investigators' institution. Appendix A.2. Exosomes Isolation Exosomes from all the blood samples collected (healthy and affected dogs) were treated with RNase A at 37 °C for 10 min (100 ng/mL, Qiagen, Hilden, Germany), and then exosomes were isolated using a polymer-based kit, ExoQuick® (System Biosciences, Palo Alto, CA, USA), according to the manufacturer's protocol. Briefly, 250 µL of serum were centrifuged for 15 min at 3000× g to remove cell debris. The supernatant was transferred to a sterile tube and 63 µL of ExoQuick® precipitation solution was added. After brief vortexing, the sample was incubated for 30 min at 4 °C, and then centrifuged at 1500× g for another 30 min at room temperature. After removing the supernatant, the pellet was re-suspended in 1/10 of the original volume using nuclease-free water. Appendix A.3. Characterization After isolation, the serum exosome pellets were re-suspended in 1 mL of phosphate buffer solution (PBS) at pH 7.4 at a protein concentration of 1 µg/µL. Then 500 µL of the exosome suspension were labeled with 50 µL of 10× Exo-FITC for 10 min at 37 °C. The exosomes were then re-isolated using an additional 100 µL of ExoQuick® and incubated for 30 min on ice. Finally, the labeled exosome pellets were re-suspended in 500 µL of PBS, ready to be labeled with specific anti-canine antibodies on magnetic beads. In particular, anti-CD63-, CD9-, and CD81-coupled magnetic beads from SBI's Exo-Flow IP® kit (System Biosciences, Palo Alto, CA, USA), used at a 1:3 dilution, were used for exosomal characterization as previously described [39]. Briefly, 50 µL of magnetic beads were incubated with 100 µL of the labeled exosomes overnight at 4 °C on an agitator. The following day, the beads/labeled exosomes were placed on a magnetic plate for 5 min, washed once with 100 µL of 1× wash buffer, and analyzed by flow cytometry (FACSCalibur, BD Biosciences, San Jose, CA, USA).
2020-01-16T09:04:49.998Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "871fabfdcebc94e61ac8d3e987cc90f571535c23", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2615/10/1/100/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ca2db66faf0fd65556d5c5d386d8366f38ff6d23", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
73687936
pes2o/s2orc
v3-fos-license
Determination of the Peak and Residual Shear Strengths of the Sandwich Material in Slopes The mudded weak interlayer is a geotechnical sandwich material exhibiting strain softening behavior, which plays an important part in the slope stability.The present work primarily focuses on the shear strength of themudded weak interlayer in rock slopes. To determine the peak and residual shear strengths of the mudded weak interlayers, the particle flow code (PFC) is used to simulate the failure behavior during the direct shear tests. Laboratory investigations including uniaxial compression test, SEM, and 3D deformation measurement are employed to calibrate the essential micro parameters of the mudded weak interlayer during the simulation process in PFC.The numerical model is built based on these parameters and both the peak and residual shear strengths can be predicted from the model.The prediction results show that the peak and residual internal friction angle are 19.36 and 14.61, while the peak and residual cohesion are 22.33 kPa and 2.73 kPa, respectively. Moreover, to validate the obtained peak and residual strengths, the results are compared with literature data. The peak and residual shear strengths of the mudded weak interlayer can serve as an important benchmark to evaluate the stability of side slopes and provide guiding suggestions for their reinforcement. Introduction The mudded weak interlayers are thin and weak rock strata in slopes, which are formed by tectonism and weathering.Because the shear strength of these weak interlayers is lower than that of the other layers in slopes, they are considered to be the most dangerous potential slip surfaces [1][2][3][4].Thus, the methods for the evaluation of the shear strength of the mudded weak interlayers have been the focus of research [5].The dynamic change of the shear strength of the mudded interlayers from the peak to the residual was often neglected in the traditional analysis of the rock slope stability [6].However, with the increase of shear deformation, strain softening will occur in the mudded weak interlayers [7][8][9].In view of the importance of studying the shear behavior of the mudded weak interlayers in slope stability evaluation, it seems essential to develop a method that reasonably predicts the peak and residual strength of the mudded weak interlayers. Extensive attention has been paid to the estimation of the shear properties of the mudded weak interlayer in recent years [10,11].More specifically, a thorough survey shows that the existing studies mainly concentrate on two aspects, the general shear properties and the strain softening characteristics. Firstly, soil tests, including the large ring shear tests and uniaxial and triaxial compression tests, were employed to investigate the general shear properties of the mudded weak interlayers.Li et al. [12] investigated the mechanical behavior of bedded rock salt containing an inclined interlayer using experimental techniques including uniaxial and triaxial compression tests.The research demonstrated that the inclined interlayer has an effect on the deformation and fracture of rock salt [12].Xu et al. [13] developed a simple shear strength model for the interlayer shear weakness zone based on the available experimental data, which is capable of describing the shear behavior of both the interlayer soil and the soil/rock interface.Li et al. 
[14] studied the shear properties of slip zone soils from three giant landslides in the Three George Project area using the large ring shear tests at different shear rates (0.1, 1, and 10 mm/s).According to the test results, the relations 2 Advances in Materials Science and Engineering between the Atterberg limits, particle size distribution, and shear properties were investigated.Chen et al. [15] studied the physical and mechanical properties of the mudded weak interlayer in a natural slip zone of red beds in southwestern China.It was found that the mudded weak interlayer of the progressive landslide is composed of fine-grained soils with a large amount of clay particles and the mudded weak interlayers are overconsolidated. The study on the strain softening characteristics seems more popular than that of the general shear properties of the mudded weak interlayers.Terzaghi et al. [17] first presented the concept of strain softening and concluded that the fissured clay has a higher shear strength when the shear strain value is at a low level.Based on direct shear tests of overconsolidated soils, Skempton [18,19] found that the shear strength of the overconsolidated soils will decrease when the shear strain exceeds a certain value.Using an extended Mohr-Coulomb constitutive model to represent the strain softening behavior of slope material, Mohammadi and Taiebat [7] evaluated the postfailure deformation of slopes and embankments with a numerical method based on the updated Lagrangian formulation.The results showed that the postfailure deformation of the slopes is a function of the strength reduction rate and the stiffness of the slope material.Conte et al. [8] also presented a numerical approach to analyze the stability of slopes in soils with strain softening behavior.The strain softening behavior of the soil was simulated by reducing the strength parameters with the increasing deviatoric plastic strain.Chen et al. [15] reported that the mudded weak interlayer exhibits a strain softening behavior, based on the shear stress-strain curves and that the residual shear strength is closely related to the water content.The microstructure of the mudded weak interlayer was also investigated using polarized microscopy and scanning electron microscope techniques.Wen et al. [20] investigated the relation between the residual strength and the index properties (particle size distribution, Atterberg limits, and mineral composition) of slip zone soils for 170 landslides in the Three George Project area.The relation can be used to evaluate the residual shear strength of the slip zone soils.Using laboratory triaxial tests, Indraratna et al. [21] found that the peak shear strength of the rock joints with compacted infill (the interlayer) increased with the decrease of the degree of saturation from 85% to 35%.Based on the laboratory observations, they developed an empirical model to describe the shear strength of the rock joints with the weak interlayer.Papaliangas et al. [22] concluded from a series of direct shear tests on sandstone discontinuity models with mean roughness amplitude of 7 mm and filled with weak interlayers of varying thickness that the peak shear strength will decrease with the increase of the thickness of the weak interlayers; however, the residual shear strength will decrease less markedly. 
Despite the fact that the existing studies [7,8,[12][13][14][15][17][18][19][20][21][22][23] have presented available experimental techniques to investigate the shear behavior of the mudded weak interlayers and the softening strain characteristics, certain unavoidable problems still challenge researchers during the determination of the shear strength.For example, a small-scale sample may differ from the actual rock slope deformation behavior due to the scaling effect.The best method is the full-scale study; unfortunately, this is costly and difficult to execute.Therefore, the traditional laboratory tests and in situ tests have limitations.In consideration of the significance of the determination methods for the shear strength of the mudded weak interlayers and the limitations of the existing testing methods, it is necessary to develop a new approach to study the shear behavior and the strain softening characteristics of the mudded weak interlayers.Although the particle flow code (PFC) is widely used for geotechnical engineering materials [24][25][26][27][28], the predictive models for the shear strength of the mudded weak interlayers based on the PFC have not yet been reported in the literature, to the best of our knowledge. DEM, specially applied to solve the discontinuous medium-related mechanical problem, is suitable for the simulation of sandwich material in slopes.On one hand, the sandwich materials in slopes, although thin and weak, are made of large numbers of weathered rock particles from the perspective of mesostructure.On the other hand, the mechanical simulation based on the continuum mechanics theory cannot connect the macroscopic properties of materials with their microstructure characteristics.However, the DEM simulation, such as PFC, is able to not only reflect the interactions among the particles, but also obtain the macroscopic mechanical properties.In this paper, we aim to determine the peak and residual shear strengths of the natural mudded weak interlayers.For this purpose, a numerical model for the mudded weak interlayers is established using the PFC in two dimensions (PFC 2D), which is a DEM numerical method.To match the numerical model better with the real conditions of the mudded weak interlayers, the mesomechanical parameters were calibrated using laboratory tests.Specifically, due to the limitations of the direct shear and triaxial compression tests (such as the dimensional limit), the uniaxial compression tests and deformation monitoring using digital speckle measurement techniques (ARAMIS) were applied to adjust the numerical model parameters to the optimal values.Thus, the strain softening characteristics of the mudded weak interlayers can be determined through a series of numerical shear tests and the shear strength (including the peak and residual strength) can be obtained from the test results.Finally, the determination method for the shear strength of the mudded weak interlayer presented in this paper is validated by comparison of the prediction results with the existing literature values. 
Laboratory Investigation To study the shear behavior of the mudded weak interlayers and reasonably predict the shear strength, the PFC 2D was employed to establish the numerical forecast model based on which both the shear failure mechanism and shear strength can be obtained.The mesomechanical parameters of the PFC models cannot be directly measured with experimental methods; instead, uniaxial compression tests have to be carried out on the mudded weak interlayer specimens to calibrate the microparameters of the particles.Deformation monitoring of the specimen surfaces using digital speckle measurement techniques (ARAMIS) provides additional information, which is useful for the calibration. Test Procedure. The uniaxial compression tests for the mudded weak interlayers were carried out using a triaxial apparatus, applying only the axial loads.As mentioned above, the mudded weak interlayer is a kind of thin and weak rock strata, so it is not that likely to obtain a sample that meets the dimension standard (2 : 1 ratio).The authors employed the digital speckle ARAMIS to make an aided analysis of the UCS results.Specifically, the three-dimensional optical measurement system (ARAMIS 3D) was applied to monitor the strain distribution along the surface of the specimen under the axial load.ARAMIS is capable of recording the deformation of the specimens in the uncontact condition with a high precision of 0.001 mm.ARAMIS should be calibrated before the experiment to ensure the precision of the results (Figure 2).Subsequently, the surface of the specimens should be pretreated with digital speckles.The CCD of ARAMIS can obtain the images of the specimen surface with the change of the axial load and transform them to digital signals.The grey scale information of the randomly distributed speckle field on the surface of the object is tackled using the digital image correlation method and the image is divided into a grey-level matrix.The deformation of an arbitrary point on the specimen surface can be measured by investigating the movement of the corresponding subarea.Figure 3 shows the deformation of the subarea on the surface of the specimen. The point ( , ) is used as the center of the selected subarea of the reference image (undeformed) and * ( * , * ) is the center of the subarea of the target image (deformed; Figure 3).Based on the assumption that the point ( , ) is another point in the selected subarea of the reference image, the new coordinate of the point * ( * , * ) in the target image can be described as follows: where and V denote the displacements of the point ( , ) in the directions of and . As mentioned above, all of the monitored specimens must be specially processed, as shown in Figure 4. Firstly, the surfaces of the specimens are sprayed with white paint and are treated as the background.After air-drying of the white paint, the speckles were uniformly sprayed on the specimen surfaces with black paint. 
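As a schematic complement to the subset-tracking description above (where the deformed coordinates follow x* = x + u and y* = y + v), the sketch below shows the basic idea behind digital image correlation: find the displacement of a speckle subset that maximizes the zero-normalized cross-correlation between the reference and deformed images. This is a minimal illustration under simplifying assumptions (integer displacements, subsets kept inside the image), not the ARAMIS algorithm.

```python
# Schematic DIC step: track one speckle subset between a reference image and a
# deformed image by maximizing zero-normalized cross-correlation over (u, v).
import numpy as np

def znc(a, b):
    # Zero-normalized cross-correlation between two equally sized patches.
    a = a - a.mean(); b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def track_subset(ref, cur, center, half=10, search=5):
    # Returns the integer displacement (u, v) of the subset centred at
    # center = (row, col); assumes subset and search window stay in-bounds.
    r, c = center
    sub = ref[r - half:r + half + 1, c - half:c + half + 1]
    best, best_uv = -np.inf, (0, 0)
    for dv in range(-search, search + 1):        # row shift (v)
        for du in range(-search, search + 1):    # column shift (u)
            cand = cur[r + dv - half:r + dv + half + 1,
                       c + du - half:c + du + half + 1]
            if cand.shape != sub.shape:
                continue
            score = znc(sub, cand)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv  # new centre: x* = x + u, y* = y + v
```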
Three groups of mudded weak interlayer specimens with special speckles on the surfaces were tested using the hydraulic loading system REY-8000.The maximum axial load of this loading system can reach 100 KN and the maximum displacement is 530 mm.During the entire test, the axial load and displacement were recorded using the axial load cell and the displacement transducer, respectively.Meanwhile, the deformation of the specimen surfaces was recorded with ARAMIS 3D.All of the testing results were used to calibrate the micromechanical parameters of the mudded weak interlayer model in the PFC2D.The loading framework in the displacement-control mode and the loading rate are 5 mm/min, which is higher than the common loading rates of the UCS test.However, the ARAMIS 3D is able to capture the pictures of the specimens with a high frequency.The smooth plastic slices were added to the upper and lower surface of the specimens during the tests, as shown in Figure 5, to eliminate the influence of friction. Typical Stress-Strain Curves from Uniaxial Compression Tests.The failure criterion of the mudded weak interlayers is extremely important for the determination of the maximum vertical stress, which can be applied to the calibration of micromechanical parameters of the PFC models.In stress space, the ideal plastic yield surfaces are supposedly invariable, and the initial yield surfaces are assumed the same as the subsequent yield surfaces [29,30].Therefore, no criteria are available in stress space to determine the failure of geotechnical materials.However, the criteria in strain space can overcome these deficiencies and the permissible deformation can serve as an intuitionistic failure criterion for the mudded weak interlayers, which can be described as follows: where max denotes the maximum permissible strain of the mudded weak interlayer.The value of the maximum permissible strain can be determined by experiments.As presented in [31], the normally consolidated clay was investigated using drained triaxial compression tests with the confining pressures of 70 kPa, 200 kPa, and 700 kPa.The resulting stress-strain curves show that the axial deviatoric stress continues to increase until the strain exceeds 20%, which can be chosen as the maximum permissible strain .In this paper, the maximum permissible strain of the mudded weak interlayer can be determined using a similar approach.Because of the limited amount of mudded weak interlayer specimens available for the uniaxial compression tests, three typical specimens were tested.Figure 6 shows the typical stress-strain curves during the uniaxial compression.The deformation process of the mudded weak interlayer upon increase of the axial load can be divided into two stages (Figure 6).In the first stage, which is also known as the elastic stage, the axial strain of the specimens increases linearly with the increasing load.The slope coefficients of the curves mainly depend on the stiffness of the specimens in this stage and the mudded weak interlayer is not destroyed yet.The second stage is known as the plastic stage.The axial load stays constant when the strain exceeds 10%, while the strain keeps increasing (Figure 6).Therefore, the strain value of 10% can be chosen as the maximum permissible strain of the mudded weak interlayer and the axial load when the strain reaches can be treated as the maximum vertical stress 1 (peak stress; Table 1). 
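The strain-space criterion described above (failure when the axial strain reaches the maximum permissible strain, taken here as 10%) amounts to reading the stress off the stress-strain record at that strain, which is then treated as the maximum vertical stress σ1. A minimal sketch follows; the record values are made-up numbers rather than the measured curves.

```python
# Read the "maximum vertical stress" off a uniaxial stress-strain record as the
# stress at the maximum permissible strain (10% here), by linear interpolation.
import numpy as np

def stress_at_permissible_strain(strain, stress, eps_max=0.10):
    strain, stress = np.asarray(strain, float), np.asarray(stress, float)
    if strain.max() < eps_max:
        raise ValueError("record ends before the permissible strain is reached")
    return float(np.interp(eps_max, strain, stress))

# Illustrative record (stress in kPa), not measured data:
eps   = [0.00, 0.02, 0.04, 0.06, 0.08, 0.10, 0.12]
sigma = [0.0,  25.0, 48.0, 60.0, 66.0, 68.0, 68.5]
print(stress_at_permissible_strain(eps, sigma))   # -> 68.0
```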
The average maximum vertical stress of the three mudded weak interlayer specimens has been obtained as follows: (3) Major Strain Monitoring Results. Because the loading rate is 5 mm/min and the height of the specimen is 20 mm, the loading time is less than 30 s.According to the actual demands of deformation monitoring, the photograph frequency was set to 5 times per second.To record the dynamic response process of the surface strain changing with the vertical stress, the entire deformation monitoring procedure was divided into 7 loading stages.Due to the little strain of the specimen at low loads, which is inconspicuous, the loading time of first stage is 6 seconds.The loading time for each of the second to seventh stages reduces to 3 seconds with the increase of the vertical stress and strain.Based on the mesh generation and unit analysis of the monitored images, the evolutionary trends of the major strain for the mudded weak interlayer specimen under uniaxial compression are shown in Figure 7. Figure 7 shows that local deformation occurs with the increase of the vertical load.It is worth noting that the surface deformation of the specimen is inhomogeneous.The unevenly distributed deformation of the specimen surface is mainly caused by the anisotropy of the inner structure of the mudded weak interlayers.As the load increases further, the plastic zone develops continuously and the shear band begins to occur in the locally weak area.The shear band keeps developing along the longitudinal direction until it crosses the whole specimen. More specifically, the deformation evolution of featured sections and points was obtained from the data collecting and analyzing system of ARAMIS.As shown in Figure 8(a), three sections were selected for the investigation of the local deformation characteristics of the specimens.The section monitoring results are shown in Figure 8(b).Figure 8(b) shows that all the units of Section 3 of the last loading stage are at a high level of strain, with the major strain > 10%.As a result, Section 3 can be treated as the failure surface.In contrast, the major strain of the units in Section 5 remains at a low level and the maximum strain is <7.5%.Finally, the major strain for the units with the same vertical position but different horizontal position in Section 1 is also shown in Figure 8(b). Similarly, four feature points were set on the surface of the specimen to monitor the dynamic changing process of the major strain.Figure 9(a) shows the position of the feature points among which the points 0 to 2 are in the shear band of the mudded weak interlayer and point 3 is in the center of the specimen.The values of the major strain for the points 0 to 2 keep growing linearly during the loading process (Figure 9(b)).However, that of point 3 hovers at zero, which means that the major strain remains stable. Prediction of the Cohesion and the Internal Friction Angle of the Specimens. 
As known from classical soil mechanics, the failure envelop cannot be obtained because only one Mohr circle of stress can be plotted based on the uniaxial compression testing results.According to the Mohr-Coulomb intensity limit equilibrium of soil, the cohesion and the internal friction angle can be determined only if the break angle of the specimen can be measured.For the unconfined compression test, the limit equilibrium condition can be described as follows: where 1 equals the maximum vertical stress and the break angle can be expressed as 45 ∘ + /2.Thus, the cohesion and internal friction angle can be obtained as follows: There are two notable breaking sections and the break angles can be automatically measured by ARAMIS (Figure 10).The final break angle is determined using the method of averaging based on which the cohesion and internal friction angle can be obtained as follows: It is notable that the cohesion and the internal friction angle are obtained from the Mohr-Coulomb intensity limit equilibrium, which can be regarded as the peak shear strength.However, the residual shear strength of the mudded weak interlayers is still unknown.The test results, including the cohesion and the internal friction angle, are used to calibrate the microparameters of the bonded particles in PFC2D.Based on the calibrated model for the mudded weak interlayers, the strain softening characteristics can be simulated while both the peak and the residual shear indexes can be predicted. Numerical Model for the Mudded Weak Interlayers Based on the Test Results PB model, the CB model has been more widely used because of its reduced number of microparameters [10].Therefore, the CB model is used in the current study.To generate a numerical model for the simulation of the mudded weak interlayers using PFC 2D, the micromechanical parameters are calibrated against the test results as described above. Basic Geometric Parameter Design. According to the actual size of the specimens, the numerical sample has a height of 20 mm and a width of 61.8 mm (Figure 11).During the direct shear test in PFC 2D, eight walls are defined.Subsequently, 2238 particles are generated inside the defined walls after setting the maximum and minimum diameters ( max = 0.45 mm, min = 0.30 mm).Undoubtedly, as DEM numerical software, PFC2D has its own limitations; uncertainty factors may result in different test results, such as that of the particle size and damping set.However, it has been proven to be a useful and reliable tool to study geotechnical materials.This study attempts to optimize the numerical model of the mudded weak interlayers in combination with the laboratory test results and other techniques. Determination of the 2D-Porosity. 
It is important to note that the PFC model of the mudded weak interlayer is two-dimensional.However, the traditional porosity tests, such as the pycnometer method, only test the 3D porosity.In this paper, the 2D-porosity of the specimen is obtained from the image segmentation of the mudded weak interlayer section micrograph.It is worth mentioning that all the section micrographs of the mudded weak interlayers are captured using the electronic scanning microscope.To eliminate the image noise and enhance the edges, spatial domain enhancement [33] was adopted to address the original section micrographs.This method can reduce the noise of the image effectively while it enhances the contrast of the image (Figure 12).Finally, threshold segmentation is applied to process the section micrographs, which is typically used to locate objects and boundaries (lines or curves).Thus, the solids and pores are separated into two independent parts, which are shown in Figure 13.The porosity can be determined using MATLAB.Mathematically, the 2D-porosity of the whole section micrograph is the statistical average of the local section micrographs in the section micrograph.According to the test results, the porosity of the numerical model is set to 0.279. Setting of Wall Stiffness. In PFC, the force cannot directly act on the walls.Instead, the servomechanism is applied to control the stress of the walls.Both the loading process and the model constraints are accomplished through the walls.Therefore, the setting of the wall stiffness is of great importance to the accuracy of the simulation results.The overlarge wall stiffness would increase the time steps of the initial equilibrium state, while the walls would be pierced by the balls if the wall stiffness is too small.The stiffness of the servowalls is therefore set to one-tenth of the ball stiffness to simulate the flexible boundary of the servowalls.However, the stiffness of the loading walls is ten times that of the ball stiffness. Calibration of the Contact-Bonded Model. In the contactbonded model of the mudded weak interlayer in PFC 2D, the unknown parameters mainly include the particle stiffness (normal contact stiffness and shear contact stiffness ), the friction coefficient, the normal and shear strengths of the contact bond (NB and SB), and the loading rate .To calibrate the micromechanical parameters of the numerical model more efficiently, the sensitivity between the macroand microparameters needs to be analyzed before the calibration process.For this purpose, the influence of the microparameters on the macroproperties can be investigated by changing the values of the mesoscopic structural and mechanical parameters.To make full use of the experimental test results, both the direct shear and uniaxial testing models are established to perform the sensitivity analysis. 
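Referring back to the 2D-porosity determination described above, a minimal sketch of the image-based estimate (denoise, threshold, count pore pixels) is given below. The original analysis was carried out in MATLAB; the scikit-image calls, the Gaussian smoothing step, and the assumption about which grey level corresponds to pores are stand-ins for illustration only.

```python
# Sketch of a 2D porosity estimate from a grayscale section micrograph:
# smooth, threshold (Otsu), then take the pore-pixel fraction.
import numpy as np
from skimage import io, filters

def porosity_2d(path, pores_are_dark=True):
    img = io.imread(path, as_gray=True)
    img = filters.gaussian(img, sigma=1.0)          # mild denoising
    t = filters.threshold_otsu(img)
    pores = img < t if pores_are_dark else img > t  # contrast-dependent choice
    return pores.mean()                             # fraction of pore pixels

# e.g. porosity_2d("sem_section.png")  # the text reports a value of ~0.279
```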
The macroscopic response mechanism upon change of the microscopic parameters is deduced from the numerical test results (Table 2).The modulus of elasticity mainly depends on the normal contact stiffness and the shear contact stiffness of particles, as well as their ratio / .The macrostrength parameters are closely related to the normal and shear bond strengths (NB and SB) and are affected by the friction coefficient among the particles.Meanwhile, the friction coefficient has an important effect on the residual strength of the numerical model.In addition, the porosity , vertical stress , and the loading rate have a disproportionate level of impact on the macroproperties, which can be seen in Table 2.All the microscopic properties of the particles and bonds are calibrated by trial and error.More specifically, the microparameters are adjusted iteratively to match the numerical testing results to the experimental results.Table 3 shows the calibrated values of the main microparameters at different vertical stress. Strain Softening Characteristics and Numerical Prediction of Peak/Residual Shear Strength As discussed in Section 3.2, all of the essential microscopic parameters are obtained through the calibration based on the laboratory test results.As a matter of fact, the calibrated model of the mudded weak interlayer is available for the numerical tests.Direct shear tests under constant vertical pressures of 50 kPa, 100 kPa, 150 kPa, 200 kPa, 250 kPa, and 300 kPa are simulated using PFC 2D and the results are presented in Figure 14.Note."+++" denotes "in close correlation"; "++" denotes "in important correlation"; "+" denotes "in partial correlation"; and "−" denotes that "there is little correlation" or "there is almost no correlation."The "PS" denotes the peak strength and "RS" denotes "residual strength."Figure 14 shows that the shear stresses increase almost linearly with the shear strains before the stress peaks in the first stage, which is also called the linear elastic stage.The stresses begin to slip down after that, while the shear strains keep rising until they reach a relatively stable stage.Actually, because the physical properties of the mudded interlayers are complex, the functional curves are converging slowly.Thus, the least-squares fitting method is used to obtain an approximate stable value (Figure 15). Figure 15 shows the functional relations between the shear stresses and shear strains of the data plots after the peak point from binomial fitting.The shear stresses gradually approach a steady value, which is the residual shear strength.Both Figures 14 and 15 show that the peak shear strength (PS) and the residual shear strength (RS) of the mudded weak interlayers keep an upward tendency with the increase of the vertical pressure.At the same time, the corresponding strains also increase at different levels.It is notable that the ratio between PS and RS stays in the range from 0.5 to 0.7. According to the direct shear test results under different vertical pressures, PS and RS are plotted as functions of the normal stresses in Figure 16.Least-squares fitting is As mentioned in Section 2.3, the friction coefficient and the cohesion of the mudded weak interlayers estimated by the uniaxial compression test and deformation monitoring is = 19.17kPa and = 20 ∘ .Compared with the numerical predicted peak friction and the cohesion, the similarity index is 96.8% and 90%, respectively. 
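The step that converts the simulated peak (PS) and residual (RS) shear strengths at each vertical pressure into cohesion and internal friction angle is a least-squares fit of the Mohr-Coulomb envelope τ = c + σ·tanφ. A minimal sketch follows; the six vertical-pressure levels are taken from the text, while the PS/RS values are placeholders chosen only to be roughly consistent with the reported parameters (peak c ≈ 22.33 kPa, φ ≈ 19.36°; residual c ≈ 2.73 kPa, φ ≈ 14.61°).

```python
# Fit the Mohr-Coulomb envelope tau = c + sigma * tan(phi) to shear strengths
# obtained at several normal stresses; returns cohesion (kPa) and phi (deg).
import numpy as np

def mohr_coulomb_fit(normal_stress_kpa, shear_strength_kpa):
    sigma = np.asarray(normal_stress_kpa, float)
    tau = np.asarray(shear_strength_kpa, float)
    slope, intercept = np.polyfit(sigma, tau, 1)    # least-squares line
    return intercept, np.degrees(np.arctan(slope))  # cohesion, friction angle

sigma_n  = [50, 100, 150, 200, 250, 300]   # vertical pressures (kPa), from the text
peak     = [40, 57, 75, 92, 110, 127]      # placeholder PS values (kPa)
residual = [16, 29, 42, 55, 68, 81]        # placeholder RS values (kPa)
print(mohr_coulomb_fit(sigma_n, peak))      # roughly (22 kPa, 19 deg)
print(mohr_coulomb_fit(sigma_n, residual))  # roughly (3 kPa, 15 deg)
```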
Verification of the Present Prediction Method The PS and RS of the mudded weak interlayer can also be obtained by repeated direct shear tests [6]. Hu and Zheng [6,16] surveyed the PS and RS indexes of the common mudded weak interlayers, which appear frequently in mudstone, carbonatite, and carbonaceous shale. In this paper, the mother rock of the mudded weak interlayer is red mudstone. Both the numerically predicted shear strength indexes of the mudded weak interlayer and the values obtained by statistics are presented in Table 4. The peak and residual friction coefficients determined by the present method are within the range of the statistics. As for the cohesion, the numerically predicted result is slightly larger than the typical values. In general, the predicted strength parameters are in good agreement with the statistical values from the literature. These facts can be used to validate the feasibility of the present method in predicting the shear strength parameters of the mudded weak interlayers. Conclusions In this study, we presented a method to study the strain softening characteristics of the mudded weak interlayers and predict their shear strength indexes. For this purpose, soil experiments, deformation monitoring techniques (ARAMIS), and DEM numerical simulation are employed. To simulate the real state of the mudded weak interlayers, the microscopic parameters of the numerical model are calibrated based on the soil test results. Specifically, the deformation monitoring during the uniaxial compression tests acts as an important benchmark in the calibration process. Furthermore, the calibrated model is used to perform several numerical direct shear tests at different vertical pressures. The shear strength indexes of the mudded weak interlayers are determined using the numerical calculation results. From the comparison of these predicted shear strength indexes with the statistical values, the validity of the present method can be verified. The relatively accurate PS and RS indexes of the mudded weak interlayers can serve as important technical tools to evaluate the side slope stability. Conflicts of Interest: The authors declare no relationships that could inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this manuscript.
Figure 1: A global image of the mudded weak interlayers (a) and the collected mudded weak interlayer specimens (b). All of the specimens were obtained from the right side of the slope of tunnel number 2 of the Longquan mountain, Southwest China, where red mudstone is widely developed. Both the overlying rock strata and the underlying bed of the mudded weak interlayers are red mudstones (Figure 1(a)). To keep the structure and the water content of the specimens undisturbed, all the mudded weak interlayer specimens were collected by cutting ring samples, which were sealed with wax (Figure 1(b)). Each cylindrical specimen has a diameter of 61.8 mm and a height of 20 mm.
Figure 2: The hydraulic loading system (a) and the calibration of the ARAMIS 3D optical deformation measuring system (b).
Figure 3: The deformation of the subarea on the surface of the specimen.
Figure 6: Axial stresses as functions of the axial strains for the three specimens.
Figure 7: Evolutionary trends of the major strain of the mudded weak interlayer specimen under uniaxial compression.
Figure 8: The locations of the three selected sections on the surface of the mudded weak interlayer specimen (a) and the monitoring results of the major strain (b).
Figure 11: Direct shear model of the mudded weak interlayer in PFC 2D.
Figure 12: Original image of the section micrograph of the mudded weak interlayer (a) and the processed image using enhancement domain methods (b).
Figure 15: Functional relations between the shear stresses and shear strains of the data plots after the peak point from binomial fitting.
Figure 16: The peak shear strength (PS) and the residual shear strength (RS) as functions of the normal stress, with fitting curves of the PS and RS.
Table 1: Maximum vertical stresses σ1 (peak stress) for the three specimens.
Table 2: Macroscopic response mechanism upon change of microscopic parameters.
Table 4: Numerically predicted shear strength of the mudded weak interlayer and the values obtained by statistics.
2018-12-27T16:29:28.941Z
2017-04-30T00:00:00.000
{ "year": 2017, "sha1": "bd8027a35c5b80f79bbe66f43235062f5163af6a", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/amse/2017/9641258.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bd8027a35c5b80f79bbe66f43235062f5163af6a", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Materials Science" ] }
52095306
pes2o/s2orc
v3-fos-license
Modeling the impact of drug interactions on therapeutic selectivity Combination therapies that produce synergistic growth inhibition are widely sought after for the pharmacotherapy of many pathological conditions. Therapeutic selectivity, however, depends on the difference between potency on disease-causing cells and potency on non-target cell types that cause toxic side effects. Here, we examine a model system of antimicrobial compound combinations applied to two highly diverged yeast species. We find that even though the drug interactions correlate between the two species, cell-type-specific differences in drug interactions are common and can dramatically alter the selectivity of compounds when applied in combination vs. single-drug activity—enhancing, diminishing, or inverting therapeutic windows. This study identifies drug combinations with enhanced cell-type-selectivity with a range of interaction types, which we experimentally validate using multiplexed drug-interaction assays for heterogeneous cell cultures. This analysis presents a model framework for evaluating drug combinations with increased efficacy and selectivity against pathogens or tumors. C ombination therapies are common in the treatment of cancer and infectious diseases 1,2 . The use of drug combinations is motivated by evidence that they can achieve cure rates superior to monotherapies [3][4][5] . Drug combinations may be classified as synergistic or antagonistic when the observed effect of the combination is greater or lesser, respectively, than is expected based on the components' effects as single agents 6,7 . Consequently, much effort has been applied to the task of identifying synergistic combinations. Drug-interaction screens identify combinations with increased efficacy against specific cell lines or phenotypes 8 , and computational methods aim to predict drug synergies using chemogenomics 9 , genetic interactions 10 , and physicochemical properties 11,12 . However, consider a drug combination that is synergistic against pathogenic or cancerous cells, but also has synergistic toxicity to healthy host cells. In this case, there will be no benefit to the therapeutic window, being the difference between the dose required for the desired effect and the dose-limiting toxicity. It is apparent that the efficacy of a synergistic combination is entirely dependent on avoiding synergistic toxicity to unintended cell types, as shown by Lehar et al. 13 and reviewed in Bulusu et al. 14 . Extending this idea, it has been debated that it is not synergy itself that is pharmacologically useful, but differential drug interactions between cell types, with the essential goal being a more favorable interaction on the target cell type than on nontarget cell types [15][16][17] . To test this idea, we implemented an experimentally tractable system to systematically characterize how cell-type-specific drug interactions affect the selectivity of combination therapies, by profiling combinations of antifungal drugs applied to the yeast species Saccharomyces cerevisiae and Candida albicans. Individual differences in single-drug sensitivity constitute therapeutic windows that select for single cell-type; however combinations of drugs may have selectivity that varies from individual agents. We used a sensitive screen to assess all 66 pairwise interactions of 12 antifungal small molecules (henceforth "drugs") in C. albicans, selected for direct comparison to a recent drug-interaction data set in S. 
cerevisiae 10 to determine differences in selectivity due to single agents vs. combinations. Our model framework and subsequent mixed culture assays show that therapeutic windows may be enhanced or diminished by differential drug interactions. Results Precise assessment of drug interactions in two model yeasts. We used a sensitive 8 × 8 checkerboard assay to assess all 66 pairwise interactions of 12 antifungal drugs (Table 1), in C. albicans and S. cerevisiae 10 . This screen included drugs that target DNA, cell wall, and metabolism as well as microtubule, phosphatase, and kinase inhibitors. All C. albicans experiments were conducted in this study. Eight S. cerevisiae experiments were newly conducted for this study: methyl methanesulfonate tested against itself, bromopyruvate, calyculin A, dyclonine, fenpropimorph, haloperidol, rapamycin, and tunicamycin. Other experimental data involving S. cerevisiae were obtained from Cokol et al., 2011. Drug interactions were quantified by isobologram analysis; briefly, each interaction score (α) for a drug pair was calculated from the concavity of the isophenotypic contours that map regions of similar growth inhibition across the drugconcentration matrix. To produce a reference that is by definition non-interacting, we measured "self-self" interactions (drugs combined with themselves) for ten drugs in both yeast species. This produced interaction scores tightly distributed around zero (mean = −0.01, std. dev. = 0.4) and defined 95% confidence intervals for deviation from additivity. Synergy and antagonism were thereby identified from these confidence intervals as α < −0.8 or α > + 0.8, respectively (Fig. 1a). Among the 66 drug combinations tested in C. albicans, 20 synergistic and 27 antagonistic drug pairs were identified (Fig. 1b). Drug interactions were substantially, but not perfectly conserved between C. albicans and S. cerevisiae (Spearman correlation test r = 0.42, p-value = 1.8 × 10 −4 ) (Fig. 2). Synergistic, but not antagonistic, interactions significantly overlapped in these related species (Fisher's exact test, shared synergy: p < 3 × 10 −5 , shared antagonism: p = 0.44). Notably, nine combinations had highly divergent interactions, being synergistic in one species and antagonistic in the other, suggesting that drug combinations may be used to selectively inhibit a particular cell type. Cell-type selectivity of individual drugs. In order to understand the relationship between drug interactions and cell-selective inhibition, we first considered the selectivity of individual drugs. The concentrations required for 50% growth inhibition of C. albicans (IC50 alb ) and S. cerevisiae (IC50 cer ) were correlated between species (r = 0.91, p < 10 −13 ; Supplementary Fig. 1), but most drugs had a therapeutic window for cell-selective inhibition due to a two-fold or greater difference in IC50 between cell types. We defined the selectivity score of a single-drug "A" (selectivity A ) as log 2 (IC50 A,alb / IC50 A,cer ), such that a score of 0 indicates no selectivity, and a score of 1 (or −1) denotes that twice (or half) as much drug is required to inhibit C. albicans compared to S. cerevisiae. Neither species was on average more drug-sensitive or resistant than the other (no significant bias in selectivity scores by sign test, p = 0.77). Benomyl and tunicamycin had the greatest single-agent selectivity for C. albicans (3.8 and 2.4), while staurosporine and bromopyruvate were most selective for the growth of S. cerevisiae (−1.7 and −1.6). 
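The single-drug selectivity score defined above is a simple log-ratio of IC50s and can be computed directly. In the sketch below, the IC50 pairs are hypothetical values chosen only so that the resulting scores match those quoted in the text; they are not the measured concentrations.

```python
# Single-drug selectivity between species, as defined in the text:
#   selectivity_A = log2(IC50_A,albicans / IC50_A,cerevisiae)
# Positive scores mean more drug is needed to inhibit C. albicans, i.e. the
# drug spares (selects for the growth of) C. albicans; negative scores select
# for the growth of S. cerevisiae.
import math

def selectivity(ic50_albicans, ic50_cerevisiae):
    return math.log2(ic50_albicans / ic50_cerevisiae)

# Hypothetical IC50 pairs (ug/mL), chosen only to reproduce the quoted scores:
examples = {
    "benomyl":       (13.9, 1.0),   # -> ~ +3.8
    "tunicamycin":   (5.3, 1.0),    # -> ~ +2.4
    "staurosporine": (1.0, 3.25),   # -> ~ -1.7
    "bromopyruvate": (1.0, 3.03),   # -> ~ -1.6
}
for drug, (alb, cer) in examples.items():
    print(f"{drug}: {selectivity(alb, cer):+.1f}")
```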
Drug interactions alter the selectivity of drug combinations. We explored the impact of drug interactions on selectivity by superimposing the isophenotypic contours from drug-interaction experiments for each cell type, for the greatest level of inhibition present in interaction data sets for both species (mean inhibitory level = 0.31; std. dev. = 0.17). This visualization shows the regions of selectivity between the drug-interaction contours and allows the comparison of the selectivity of a combination with the All drug names are provided as well as abbreviations used in figures, PubChem ID and IC50 (µg/mL) for each drug tested in C. albicans (alb) and S. cerevisiae (cer). IC50 levels were determined with yeast cells grown overnight and diluted in liquid culture to OD 600 = 0.1. S. cerevisae and C. albicans concentration-response experiments were conducted in parallel to allow for direct comparison of IC50 levels selectivity of individual drugs (Fig. 3, left panel). Selectivity of a combination "A + B" was determined similarly to the single-drug selectivity, by the ratio of total drug concentrations required to achieve an equal level of inhibition (Methods), where selectivity A+B = log 2 (IC A+B,alb /IC A+B,cer ), similar to a previously defined selectivity index 13 ( Supplementary Fig. 2, upper panel). Selectivity of self-self combinations was almost perfectly correlated with their single-drug selectivity (r = 0.99, p = 3.6 × 10 −5 ), as expected from first principles. Combination selectivity was unaltered when two drugs have self-self or additive interactions (e.g., methyl methanesulfonatemethyl methanesulfonate, Fig. 3a, left). However, for drug pairs whose interactions vary between cell types, the selectivity of the combination diverged from what is anticipated from its component drugs. Pentamidine and staurosporine (PEN + STA) each preferentially inhibit C. albicans, and because they are synergistic only in C. albicans, their combination enhances selectivity for the growth of S. cerevisiae compared to either single drug (Fig. 3b, left). Antagonistic interactions can also enhance selectivity: methyl methanesulfonate and rapamycin (MMS + RAP) each preferentially inhibit S. cerevisiae, and in combination produce an especially strong antagonism in C. albicans which enhances their selectivity for the growth of C. albicans (Fig. 3c, left). Differential interactions can both strengthen and weaken selectivity: pentamidine and fenpropimorph (PEN + FEN) both preferentially inhibit C. albicans, but are antagonistic only in this cell type (Fig. 3d, left), which causes diminished selectivity. A yet more striking result ensues from the divergent interactions of calyculin A and dyclonine (CAL + DYC): though each drug alone preferentially inhibits C. albicans, their combination demonstrates such potent synergy only in S. cerevisiae that their cell-type selectivity is inverted and is therefore expected to select for the growth of C. albicans (Fig. 3e, left). In order to compare the observed selectivity of combinations with a null model, we approximated expected selectivity (selectivity exp ) as the combination selectivity that would be observed if drugs A and B have additive interactions in both species (Fig. 3, middle column) ( Supplementary Fig. 2, lower panel). Comparing expected and observed selectivity across the complete set of drug combinations (example comparisons in Fig. 3, right column), we found that 41 of 66 combinations showed a significant difference in selectivity from additive Fig. 3). 
Thus, differential drug interactions powerfully and quite commonly influence the cell-type specificity of drug combinations, with the effect of enhancing or diminishing therapeutic windows. A common goal of drug combination design is to identify drug pairs that synergistically inhibit the intended target cell type while not producing synergy in other cell types. However, in our study we observed that drug pairs with selectivity for the growth of S. cerevisiae or C. albicans may be synergistic, additive or antagonistic in either species (Supplementary Fig. 4). Accordingly, drug combinations showing significant selectivity against one species were not enriched for synergistic interactions in that species (Fisher's exact test, p > 0.05). We hypothesized that selectivity is associated with the difference of drug interactions between cell types. As a measure of drug-interaction difference, we calculated delta-α, the difference of α scores between C. albicans and S. cerevisiae (α_alb − α_cer). Delta-α scores are high for pairs that are antagonistic against C. albicans and synergistic for S. cerevisiae. Overall, there was a weak but significant correlation (Spearman correlation test, r = 0.26, p = 0.02) between cell-selective growth inhibition and delta-α (Supplementary Fig. 5). Therefore, we conclude that combinatorial selectivity is influenced by the difference between the two drug interactions, neither of which is necessarily synergistic. In order to understand the effect of antimicrobial resistance on therapeutic selectivity, we modeled the effects of 100-fold resistance on selectivity metrics for all tested drug pairs. We assumed that isophenotypic contours scaled with changes in drug sensitivity 18 and simulated resistance by multiplying the minimal inhibitory concentration of one compound by 100 while preserving the shape of the drug-interaction isobole. We observed that delta-α and the deviation of observed from expected selectivity (sel − sel_exp) are not significantly correlated after simulating resistance, suggesting that extreme drug resistance is more influential on selectivity than variation in drug interactions (Supplementary Fig. 5).

Validation of the selectivity model in co-cultures. Here we have modeled the selectivity of combinations of drugs to different fungal species. However, it is worth noting that sensitivity to drug combinations was tested separately for each species, and not together. To experimentally test the predicted selectivity change due to drug interactions, we conducted co-culture assays with fluorescently labeled strains of S. cerevisiae (mCherry+, GFP−) and C. albicans (GFP+, mCherry−). We created a mixed culture of the two fluorescently labeled yeast species with approximately equal numbers of cells from both species based on flow cytometry. Mixed cultures were treated with the two individual drugs or their combination, incubated for 4 h, and assessed with flow cytometry for the %C. albicans and %S. cerevisiae after treatment (Fig. 4a). Drug-free controls were used as a reference to confirm single-drug selectivity in the context of yeasts with different growth rates. For each experiment, we computed a selectivity score following the same formula as our model: log2(C. albicans/S. cerevisiae). Importantly, since the growth rate of C. albicans is faster than that of S. cerevisiae, it is expected that the %C. albicans in the no-drug condition will increase as compared to the initial ratio. In these experiments, we used two drug pairs with striking phenotypes illustrated in Fig.
3: (i) CAL + DYC is synergistic in both species but the synergy is stronger in S. cerevisiae. According to our model, each of these drugs is expected to select for S. cerevisiae; however, the combination is expected to select for C. albicans due to the inverted selectivity (Fig. 3e). Figure 4b confirms the expectation that %C. albicans in the no-drug condition increases in the absence of selective pressure. In agreement with the single-species experiments, the selectivity scores for the CAL- or DYC-treated cultures were lower than the no-drug condition, indicating that each of these drugs selects for S. cerevisiae growth. The combination CAL + DYC had a higher selectivity score than the no-drug condition, indicating that the combination selects for C. albicans, thereby validating the prediction of inverted selectivity under treatment with the CAL + DYC combination. (ii) MMS + RAP is antagonistic in both species but the antagonism is stronger in C. albicans. According to our model, each of these drugs is expected to select for C. albicans, and the combination is expected to have a higher selectivity for C. albicans than either drug, due to enhanced selectivity by antagonism (Fig. 3c). In agreement with our model, we observed that the selectivity scores for MMS or RAP were higher than the no-drug condition, indicating that these drugs individually select for C. albicans. The selectivity score for the combination MMS + RAP was higher than either single drug, validating the predicted enhanced selectivity by antagonism (Fig. 4c). All data collected for these experiments are presented as Supplementary Fig. 6. In order to extend our approach to alternative phenotypes, we conducted drug-interaction experiments for fungicidality in mixed cultures of fluorescent C. albicans and S. cerevisiae (Fig. 5a). Among all 12 drugs studied, only MMS and RAP exhibited acute strong fungicidal activity and hence were amenable to an assay of selective cell killing (Supplementary Fig. 7). This combination is antagonistic in both C. albicans and S. cerevisiae, but our analysis suggested enhanced selectivity for the growth of C. albicans (Supplementary Fig. 8). We assessed drug interactions for fungicidal activity by co-culturing yeast strains in a 5 × 5 combination matrix of MMS and RAP for 1 h, plating cells and enumerating cell killing by counting fluorescent colony-forming units (CFU). In strong agreement with the single-species drug-interaction experiments (Fig. 3c), we observed that MMS + RAP is antagonistic for fungicidal activity in both species, but to a stronger degree in C. albicans (Fig. 5b, c, Supplementary Fig. 9). Consistent with the superimposed growth isoboles, we observed that the low MMS-high RAP region is powerfully selective, killing more than 99% of S. cerevisiae cells with less than 50% fungicidal effect on C. albicans. Importantly, MMS and RAP alone each have similar fungicidal concentrations in the two species, and are incapable of exerting such effective cell-selective killing as single agents.

Discussion The ultimate goal of synergistic drug combinations is to enhance the therapeutic window between efficacy and toxicity 13. Drug-interaction screens may identify combinations with increased efficacy and selectivity for specific cell lines or phenotypes 8.

Fig. 3 (caption): Drug interactions may enhance, diminish or invert selectivity.
Left panel: observed isophenotypic contours of drug-interaction assays for S. cerevisiae (magenta) and C. albicans (green) are overlaid in a 2D grid adjusted for relative concentration. We linearly transformed the isophenotypic contours for the drug-interaction assays so that S. cerevisiae's isophenotypic contour intercepted both the x and y axes at 1. Selectivity of a combination was determined by the log ratio of the distance from the origin to the C. albicans vs. S. cerevisiae contours (log2(d_albicans/d_cerevisiae)). Selectivity is therefore positive for drugs or combinations that select for C. albicans. Middle panel: the null model for expected combination selectivity, assuming drug pairs are additive. As in observed selectivity, expected selectivity is calculated based on the log ratio of distances from the origin, with positive expected selectivity corresponding to an expected selectivity for C. albicans in the absence of synergistic or antagonistic drug interactions.

In this study, we showed that while synergistic combinations can indeed increase the cell-type selectivity of growth-inhibiting drugs 12, the same is also true of antagonistic combinations, because it is the difference in drug interactions between cell types that enhances or diminishes the therapeutic window. Here, we provided a proof-of-concept that drug interactions may shift selectivity with respect to single-drug effects in mixed microbial communities. Flow cytometry assessment of mixed yeast cultures illustrated that a strong synergistic interaction between calyculin A and dyclonine in S. cerevisiae selected for the growth of C. albicans, as expected. However, for the combination of MMS and rapamycin, the strength of antagonism selected for C. albicans, both in growth and survival assays. Importantly, synergy does not guarantee enhanced selectivity, with synergistic "off-target" effects capable of diminishing or even inverting the therapeutic selectivity. We found that synergistic drug interactions for the 12 antifungals tested were significantly conserved between these two yeast species, while antagonistic interactions were not conserved. A likely explanation for this is promiscuous synergy, in which one drug can affect the bioavailability of many other drugs, e.g., via effects on membrane composition. Indeed, it seems likely that much of the synergy for drugs targeting ergosterol biosynthesis in this study (DYC, FEN, HAL, TER) is due to increased bioavailability of partner drugs. Pentamidine has also been previously identified as a promiscuously synergistic drug 10, although the mechanisms underlying this promiscuity remain unknown. By contrast, only 3 of the 12 antifungals (BEN, BRO, STA) from our panel have previously been identified as frequently participating in antagonistic interactions 19. We used the checkerboard assay for a full appreciation of interaction and selectivity as a proof-of-principle and found that selectivity scores at θ = 45 are significantly correlated with selectivity scores at θ = 23 and θ = 66. This indicates that a simplified method for determining selectivity for equi-inhibitory quantities of two drugs may provide a useful approximation of the selectivity of drug combinations [20][21][22]. Methods such as multiplex ELISA, PCR, and gene sequencing allow cost-effective experiments. Drug-interaction assays are generally conducted using a single microbe type or cell line. With the co-culture method we described, the interactions for more than one species can be measured in one experiment.
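A brief sketch of two quantities used earlier in the Results — the interaction difference delta-α and the 100-fold-resistance simulation — is given below. The α sign convention (synergy negative, antagonism positive), the contour points and all numeric values are illustrative assumptions; resistance is mimicked by rescaling one drug's axis of a measured isophenotypic contour while keeping its shape.

```python
import numpy as np

def delta_alpha(alpha_alb, alpha_cer):
    """Difference of drug-interaction scores between species.
    High values: antagonistic in C. albicans, synergistic in S. cerevisiae."""
    return alpha_alb - alpha_cer

def simulate_resistance(contour_xy, drug_axis=0, fold=100.0):
    """Model 100-fold resistance to one drug by scaling its concentrations along
    an isophenotypic contour, preserving the contour's shape.

    contour_xy : (n, 2) array of (conc_drug_A, conc_drug_B) points at one inhibition level
    drug_axis  : which drug acquires resistance (0 = drug A, 1 = drug B)
    """
    scaled = np.array(contour_xy, dtype=float)
    scaled[:, drug_axis] *= fold
    return scaled

# Assumed interaction scores: antagonistic in C. albicans, synergistic in S. cerevisiae.
print(delta_alpha(alpha_alb=0.8, alpha_cer=-0.5))          # -> 1.3

# Assumed contour (relative concentrations) for a hypothetical drug pair.
contour = np.array([[1.0, 0.0], [0.6, 0.3], [0.3, 0.6], [0.0, 1.0]])
print(simulate_resistance(contour, drug_axis=0, fold=100.0))
```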
Our study uses a multiplexed drug-interaction assay, where the interaction is simultaneously determined for multiple species in a heterogeneous culture. We propose that this approach could be applied to mixed cultures of cancer cell lines tagged by DNA barcodes 23 in order to efficiently identify drug combinations with selective synergy against specific cancer genotypes. Though differential drug interactions are common and, we propose, important in the design of combinations, they were observed against an overall strong conservation of drug interactions between two species separated by hundreds of millions of years of evolution 24. Thus, these results provide a strong rationale for screening drug interactions in model organisms or cell lines to prioritize promising combinations for testing in related pathogens. We also predict that drug combinations that are synergistic against a drug-sensitive cell type may remain therapeutically relevant against drug-resistant strains whose genetic similarity remains high. This prediction is in agreement with a recent study that showed that gene deletions rarely lead to a change in drug interactions among E. coli non-essential gene deletion strains 25. Similar reasoning predicts that drug combinations that are synergistic against cancer cells may be accompanied by synergistic toxic side effects, given the inherent similarity between cancer cells and normal cells from the same patient. While the discovery of synergistic anti-cancer drug combinations is a growing area of research, the therapeutic potential of synergistic drug pairs must be pursued with caution considering the possibility of enhanced toxicity. For example, the use of combination immunotherapy in melanoma increases the objective response rate by 14% but at the cost of more than doubling the rate of toxic effects so severe that 36% of the patients discontinue treatment 26. We note that while our co-culture assay is an exciting means to detect selectivity in heterogeneous cell cultures, our analytical model is also applicable to other clinically relevant selectivity considerations.

Fig. 4 (caption): Co-culture experiments validate the selectivity model predictions. (a) mCherry-expressing S. cerevisiae and GFP-expressing C. albicans cells were co-cultured in single or combination drug treatments in liquid media, and growth of each species was quantified using flow cytometry. (b) Selectivity scores (log2(C. albicans/S. cerevisiae)) for mixed-culture assays before treatment (t0), in the no-drug condition, and in the CAL, DYC, and CAL + DYC co-culture experiments are shown (n = 2). Also shown is the average selectivity from the CAL and DYC conditions, which is the expected selectivity in the absence of drug interactions. Comparison of the no-drug condition to t0 shows that the amount of C. albicans in co-culture increases without any selective pressure, which is expected due to the shorter doubling time of C. albicans. Comparison of CAL and DYC to the no-drug condition validates the model prediction of single-drug selectivity for S. cerevisiae. Comparison of CAL + DYC to the no-drug condition indicates that the combination is selective for C. albicans, as predicted by the selectivity model (inverted selectivity). (c) MMS and RAP both individually select for C. albicans. As predicted by the model, the MMS + RAP combination has greater selectivity for C. albicans (selectivity increase due to antagonism) (n = 2).
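The co-culture selectivity score used in Fig. 4 is simply the log2 ratio of the two species' abundances measured by flow cytometry. The sketch below shows one way to compute it from gated event counts; the event numbers are invented for illustration and are not the study's measurements.

```python
import numpy as np

def coculture_selectivity(gfp_events, mcherry_events):
    """log2(C. albicans / S. cerevisiae) from gated flow-cytometry event counts.
    GFP+ events are C. albicans, mCherry+ events are S. cerevisiae.
    Positive scores indicate that the condition favours C. albicans."""
    return np.log2(gfp_events / mcherry_events)

# Hypothetical gated event counts per condition (>20,000 total events acquired each).
conditions = {
    "t0":        (10_200, 10_100),
    "no drug":   (14_500,  8_000),   # C. albicans outgrows S. cerevisiae untreated
    "CAL":       ( 6_000, 11_000),
    "DYC":       ( 6_500, 10_500),
    "CAL + DYC": (15_800,  5_200),   # inverted selectivity under the combination
}
for name, (gfp, mcherry) in conditions.items():
    print(f"{name:9s} selectivity = {coculture_selectivity(gfp, mcherry):+.2f}")
```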
Multiplexed drug-interaction assays may be employed to detect selectivity under very specific conditions, for example, commensal vs. pathogenic microbes that may be cultured together. These assays are especially useful to simultaneously measure drug interactions for drug-resistant and drug-sensitive microbes in order to identify concentration regimes of the two-drug space that specifically select against drug-resistant strains 15. However, in many therapies, toxicity occurs at tissues that are not at the affected site of infection or disease, and that are therefore not amenable to co-culture. For example, ototoxicity arising from aminoglycoside antibiotics 27; nephrotoxicity arising from vancomycin, aminoglycosides, and some beta-lactams 28,29; cardiotoxicity from the chemotherapy doxorubicin and trastuzumab 30; or peripheral neuropathy associated with chemotherapies including cisplatin, vincristine, and paclitaxel are all clinically observed drug toxicities that could be modeled by this framework of selectivity 31. By superimposing two systematic drug-interaction experiments in distantly related yeast species, we generated a framework for measuring the selectivity of individual drugs and their combinations. The analysis developed for this study provides a model for assessing drug efficacy vs. side effects for combinations. This strategy has immediate applications to the evaluation of the therapeutic potential of combination therapies or predicting adverse side effects. Further studies may assess selectivity of drug combinations in cancer vs. normal cells to limit toxicity, or pathogenic vs. commensal microbes (e.g., S. aureus vs. S. epidermidis) to preserve the microbiome under antibiotic treatment.

Methods Drug-interaction assessment. BEN, BRO, FEN, HAL, MMS, PEN, and TER were purchased from Sigma-Aldrich. CAL, RAP, STA, TAC, and TUN were purchased from AG Scientific; DYC was purchased from Toronto Research Chemicals (Table 1). All drugs were dissolved in DMSO or water and stored at −20°C. For C. albicans strain SC5314, yeast cells were grown in YPD (1% yeast extract, 2% bactopeptone, 2% glucose) overnight and diluted to an OD600 of 0.1 in YPD with the desired drug concentrations, controlled for a final solvent concentration of 2% DMSO, at 30°C. Yeast cells were grown in liquid culture in an 8 × 8 grid on 96-well plates with linearly increasing quantities of drug on each axis, from zero drug to approximately the minimal inhibitory concentration. Plates were incubated for 16 h in a Tecan Genios microplate reader, with OD595 readings every 15 min. Additional drug-interaction assays (MMS tested against MMS, BRO, CAL, DYC, FEN, HAL, RAP, and TUN) in S. cerevisiae strain BY4741 were conducted using the same setup as described for C. albicans, with a duration of 24 h. The raw data for these experiments are provided as Supplementary Data 1 (also available at https://doi.org/10.6084/m9.figshare.6849068). We used the area under the OD595 curve of each condition as a metric of cell growth, and standardized the growth level to the drug-free condition. Alternative growth metrics, such as the slope of the growth curve and the end-point OD, correlated strongly with this measure (Supplementary Fig. 10). Drug-interaction scores obtained using the variable growth metrics also strongly correlated (Supplementary Fig. 11). A drug-interaction score (α) was defined by assessing the concavity of the longest isophenotypic contour in the drug-interaction grid 10.
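A minimal sketch of the growth metric just described — the area under the OD595 time course, standardized to the drug-free well — is given below. The readings are simulated for illustration; the concavity-based interaction score α itself follows the cited method and is not reproduced here.

```python
import numpy as np

def growth_auc(od_readings, interval_min=15.0):
    """Area under an OD595 time course sampled at fixed intervals (trapezoidal rule)."""
    od = np.asarray(od_readings, dtype=float)
    t = np.arange(od.size) * interval_min
    return float(np.sum((od[1:] + od[:-1]) / 2.0 * np.diff(t)))

def relative_growth(od_treated, od_drug_free, interval_min=15.0):
    """Growth in a treated well standardized to the drug-free control well."""
    return growth_auc(od_treated, interval_min) / growth_auc(od_drug_free, interval_min)

# Simulated 16 h time courses (65 readings, one every 15 min), logistic-like curves.
t = np.arange(65) * 15.0
drug_free = 0.1 + 0.9 / (1 + np.exp(-(t - 480) / 60))
treated   = 0.1 + 0.4 / (1 + np.exp(-(t - 600) / 60))   # slower growth, lower plateau

print(f"relative growth under treatment: {relative_growth(treated, drug_free):.2f}")
```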
The Loewe additivity model for drug interactions shows that isophenotypic contours are straight lines (α = 0) for a drug "combined" with itself (a "self-self" combination), which serves as the reference that defines non-interacting, or "additive", combinations 6,32.

Selectivity assessment. To assess the selectivity of drug combinations for a specific yeast strain, isophenotypic curves at the greatest level of inhibition observed in both species are superimposed on a drug-interaction grid adjusted for the individual strain concentration-response. Linear interpolation of the area under the growth (OD595) curve was used to identify common inhibitory levels in the 8 × 8 checkerboard of drug response. Considering drug-interaction contour plots in polar coordinates, an angle reflects the relative fraction of each drug within a pair: as θ changes from 0 to 90 degrees, the fraction of drug A increases and the fraction of drug B decreases. The x- (θ = 0) and y- (θ = 90) intercepts represent the relative inhibitory concentrations of drug B and drug A alone. The distance (d) from the origin to each point along the isophenotypic curve for each species at each angle represents the relative amount of the drug combination needed to achieve the selected level of inhibition. The selectivity score for C. albicans compared to S. cerevisiae is defined as log2(d_albicans/d_cerevisiae) at θ = 45. Selectivity scores at θ = 45 significantly correlated with selectivity scores at θ = 23 and θ = 66 (Spearman correlation test, r = 0.95 and r = 0.94, respectively; Supplementary Fig. 12), and we therefore used selectivity at θ = 45 for further comparisons of selectivity. Selectivity scores obtained under the variable growth metrics also strongly correlated (Supplementary Fig. 13). To account for selectivity discrepancies between the single drugs in a combination, we computed an expected selectivity metric assuming additive interactions between the drugs in both species. Expected selectivity was assessed by connecting each species' set of x- and y-intercepts with a straight line and computing a selectivity score based on the relative distance from the origin to each contour at θ = 45.

Co-culture combination treatment assay and flow cytometry. S. cerevisiae (mCherry) and C. albicans (GFP) were grown in YPD liquid culture overnight at 30°C to OD600 = 0.5, diluted to OD600 = 0.1, and combined in approximately equal numbers of cells based on flow cytometry. Cells were then co-incubated on 96-well plates with no drug, single drugs or a 1:1 ratio combination of drugs. Final concentrations in the single-drug conditions were as follows: CAL = 4 μg/mL, DYC = 500 μg/mL, MMS = 50 μg/mL, RAP = 1 ng/mL. Each well had a final volume of 160 μL with a solvent concentration of 2% DMSO. Cells were incubated for 4 h, shaking at 900 rpm, at 30°C. This incubation period maintains culture heterogeneity despite the difference in proliferation rates, as the C. albicans doubling time is shorter than that of S. cerevisiae (2 h vs. 2.5 h). Strains with similar growth rates may be amenable to co-culture for a longer duration, as in previous studies 33. Cell/drug mixtures were assessed for the relative abundance of each yeast species by flow cytometry. For all experimental conditions, >20,000 events were acquired using an Attune NxT flow cytometer. Events were gated by forward and side scatter, and fluorescence distributions were calculated in FlowJo. Single-species cultures were used to define the gates for GFP+ and mCherry+ yeasts, representing C. albicans and S. cerevisiae, respectively.
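A sketch of the selectivity assessment described above: observed selectivity compares the distances from the origin to each species' contour along the θ = 45° ray, and expected selectivity uses the straight (additive) line between each species' own single-drug intercepts. The contour points and intercepts below are invented, and the angular interpolation is just one simple way to read a distance off a discretized contour.

```python
import numpy as np

def distance_at_angle(contour_xy, theta_deg=45.0):
    """Distance from the origin to a discretized isophenotypic contour along a ray,
    obtained by interpolating contour-point distances over their polar angles."""
    xy = np.asarray(contour_xy, dtype=float)
    ang = np.degrees(np.arctan2(xy[:, 1], xy[:, 0]))
    dist = np.hypot(xy[:, 0], xy[:, 1])
    order = np.argsort(ang)
    return np.interp(theta_deg, ang[order], dist[order])

def additive_distance(x_intercept, y_intercept, theta_deg=45.0):
    """Exact distance from the origin to the straight (additive) isobole x/x0 + y/y0 = 1."""
    th = np.radians(theta_deg)
    return 1.0 / (np.cos(th) / x_intercept + np.sin(th) / y_intercept)

def observed_selectivity(contour_alb, contour_cer, theta_deg=45.0):
    return np.log2(distance_at_angle(contour_alb, theta_deg) /
                   distance_at_angle(contour_cer, theta_deg))

def expected_selectivity(intercepts_alb, intercepts_cer, theta_deg=45.0):
    return np.log2(additive_distance(*intercepts_alb, theta_deg) /
                   additive_distance(*intercepts_cer, theta_deg))

# Invented contours (relative concentrations); the S. cerevisiae contour bows toward
# the origin, i.e. synergy in S. cerevisiae only.
alb = [(1.4, 0.0), (0.9, 0.5), (0.5, 0.9), (0.0, 1.3)]
cer = [(1.0, 0.0), (0.4, 0.4), (0.0, 1.0)]
print(f"observed selectivity: {observed_selectivity(alb, cer):+.2f}")   # > 0: favours C. albicans
print(f"expected selectivity: {expected_selectivity((1.4, 1.3), (1.0, 1.0)):+.2f}")
```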
Multiplexed fungicidal drug-interaction assay. S. cerevisiae (mCherry) and C. albicans (GFP) were grown in YPD liquid culture overnight at 30°C, diluted to OD600 = 0.02, and combined in equal volume and cell density (CFU/mL). These yeasts had growth rates and concentration-responses similar to those of the unlabeled strains. Cells were then co-incubated on 96-well plates in a 5 × 5 grid with two-fold serial dilutions of MMS (maximum concentration, 2 mg/mL) and RAP (maximum concentration, 1 μg/mL) on each axis, including a zero-drug condition. Each well had a final volume of 100 μL with a solvent concentration of 2% DMSO. Cells were incubated for 1 h, shaking at 600 rpm, at 30°C in a ThermoFisher microplate shaker. Cell/drug mixtures were then diluted 1/10 in YPD and 50 μL of diluted cells from each condition were transferred to individual YPD-agar plates for enumeration. After 48 h of incubation at 30°C, plates were photographed with a custom-built fluorescence imaging "Macroscope" device 34 to visualize bright-field (1/10 s), GFP (0.4 s), and mCherry (3.2 s) exposures, at aperture 5.6 and ISO 100. The colonies were then enumerated using the ImageJ colony counter (size: 400-6000 pixels^2, circularity: 0.85-1).

Cell lines. Wild-type C. albicans and S. cerevisiae were purchased from ATCC. Fluorescent C. albicans and S. cerevisiae were kindly provided by the Cowen Lab at the University of Toronto and the Springer Lab at Harvard Medical School, respectively.

Code availability. Code to generate the interaction and selectivity metrics is available upon request.

Data availability. The data that support the findings of this study are available from the corresponding author upon request. Newly conducted drug-interaction assays and growth assays are available at https://doi.org/10.6084/m9.figshare.6849068.
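To summarize such a fungicidal experiment, the fluorescent CFU counts can be normalized per species to the no-drug wells to obtain survival (and killing) fractions across the 5 × 5 grid. The sketch below assumes a simple counts dictionary per condition; all numbers are invented and merely illustrate the calculation, not the study's results.

```python
def survival_fraction(cfu_treated, cfu_no_drug):
    """Fraction of cells surviving the 1 h drug exposure, per species,
    estimated from fluorescent colony-forming units."""
    return cfu_treated / cfu_no_drug

# Hypothetical CFU counts (GFP colonies = C. albicans, mCherry colonies = S. cerevisiae)
# for the zero-drug well and a low-MMS / high-RAP well of the 5 x 5 grid.
no_drug = {"alb": 480, "cer": 450}
low_mms_high_rap = {"alb": 300, "cer": 3}

for species in ("alb", "cer"):
    killed = 1.0 - survival_fraction(low_mms_high_rap[species], no_drug[species])
    print(f"{species}: {100 * killed:.1f}% killed")
# The invented counts echo the qualitative pattern reported above: near-complete
# killing of S. cerevisiae with a much smaller effect on C. albicans.
```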
Communicating Hydrocephalus in a Case of Long-Term Primary Hyperparathyroidism We present the rare case of a 47-year-old woman with protracted primary hyperparathyroidism complicated by communicating hydrocephalus and cerebellar tonsillar herniation secondary to calvarial thickening. The parathyroid glands remained elusive, despite the use of advanced preoperative imaging modalities and three neck explorations. The serum calcium was optimally controlled with cinacalcet and alfacalcidol. Awareness of this rare complication is essential for early diagnosis and prompt intervention to prevent fatal posterior brain herniation.

INTRODUCTION Although hypercalcemia caused by primary hyperparathyroidism (PHPT) is common, advanced hyperparathyroid skeletal manifestations are now rarely encountered, owing to the higher rate of early diagnoses and successful definitive surgical treatments. 1 Unfortunately, PHPT secondary to elusive parathyroid adenomas has remained a challenging entity, presenting a dilemma to endocrinologists managing these patients. 2 Here, we present a rare complex case of PHPT with elusive parathyroid adenomas complicated by florid advanced skull manifestations causing communicating hydrocephalus.

CASE A 47-year-old female presented to our clinic with a 40-year history of persistent hypercalcemia secondary to PHPT. During the time that she was followed up in the endocrine clinic, corrected serum calcium ranged from 2.52 to 2.89 mmol/L [normal value (NV) 2.14-2.58], serum phosphate from 0.76 to 1.32 mmol/L (NV 0.74-1.52) and serum parathyroid hormone (PTH) from 9.73 to 13.9 pmol/L (NV 1.3-7.6). Her menstruation was regular. She had first presented at the age of 8 years with facial deformities caused by multiple mandibular swellings. The investigations did not suggest McCune-Albright syndrome, familial hypocalciuric hypercalcemia or multiple endocrine neoplasia type 1 or 2a. There was no significant family history. She underwent two corrective surgeries for her mandibular deformities. Histopathologic examinations of the excised mandibular tissues confirmed the presence of brown tumors. Preoperative imaging modalities included neck ultrasonography; angiography to assess the neck vasculature; selective parathyroid venous sampling; radioisotope bone scintiscan (with 12 mCi of technetium-99m methylene diphosphonate) or sestamibi imaging; and sestamibi imaging with single-photon emission computed tomography (SPECT). However, the discordant results of these advanced imaging modalities were not helpful in localizing the parathyroid glands, which underlines the challenges encountered during her three unsuccessful neck explorations at ages 8, 12 and 40. This patient was lost to follow-up in 2000. Ten years later, in 2010, she was admitted with acute cholecystitis secondary to cholelithiasis. She had an uneventful cholecystectomy. During that admission, she was also assessed for complications of PHPT. Bilateral nephrocalcinosis, corneal calcifications and worsening skeletal manifestations of PHPT were found. The skeletal changes included subperiosteal bone resorption and acro-osteolysis of the hands (Figure 1). There was no evidence of fractures or renal dysfunction. At the time the patient was diagnosed with PHPT, sodium dihydrogen phosphate was the available medical treatment for persistent hypercalcemia. Subsequently, cinacalcet was added when it became available.
Unfortunately, she experienced adverse effects from cinacalcet: persistent poor appetite, nausea and vomiting led to discontinuation of the drug for one month. When cinacalcet was reinitiated, the dose was titrated upward gradually to the optimal dosage of 25 mg twice daily without severe adverse gastrointestinal effects. Ergocalciferol (vitamin D2), at a dosage of 0.25 µg daily (titrated based on the calcium level), was also started for her subnormal serum 25(OH)D level.

Figure 1. AP radiograph of the left hand, cropped to accentuate osseous detail. Areas of subperiosteal resorption are seen markedly at the radial aspect of the third middle phalanx, and subtly at the radial aspect of the second middle phalanx (arrows). Acro-osteolysis is also evident at the distal phalanges (arrowheads).

The patient was closely monitored for recurrent swelling and purulent discharge associated with mandibular osteomyelitis. Her alkaline phosphatase (ALP) levels remained relatively stable (600 to 870 U/L) for the first 24 years (NV 40-150 U/L). The subsequent two-fold to three-fold rise in ALP (1,600 to 2,000 U/L) correlated with recurrent osteomyelitis in the mandible. Computed tomography (CT) showed thinning and sclerotic changes in the mandible with multiple lucent areas (Figure 2). Cystic lesions with well-circumscribed sclerotic margins were suggestive of brown tumors. However, in view of the persistent osteomyelitis, no biopsy was carried out. Hydrocephalus on CT was an incidental finding while monitoring the chronic mandibular osteomyelitis. Clinically, she was asymptomatic. She had no papilledema, neurological deficit or gait abnormality. Subsequent magnetic resonance imaging revealed a communicating hydrocephalus and diffuse calvarial thickening with intracortical tunnelling (Figure 3). This calvarial thickening caused a reduction in the posterior cranial fossa, including the foramen magnum. The resulting compression of the cerebellum and anterior cervico-medullary junction caused cerebellar tonsillar herniation. A ventriculo-peritoneal shunt was inserted, with an uneventful post-surgical course.

DISCUSSION The patient exhibited a complex clinical course with multiple disease complications due to elusive abnormal parathyroid glands. The most unusual and significant complication was the complex florid osteitis fibrosa cystica change in the skull. Diffuse calvarial thickening of the skull reduced the posterior cranial fossa and obliterated the foramen magnum, causing local compression of the cerebellum and anterior cervicomedullary junction and leading to cerebellar tonsillar herniation and communicating hydrocephalus. In contrast, the only other reported case of hydrocephalus was due to local compression by a brown tumor of the maxilla secondary to PHPT. 3 In addition, the presentation of multiple mandibular brown tumors secondary to asymptomatic PHPT in a child less than 10 years of age has not previously been reported. Although previously reported series of skull brown tumors consisted predominantly of women, the patients were young and middle-aged adults. 4 In a review of 16 cases involving brown tumors of the skull base, the mean age was 32 years, and 75% of the patients were women. 5 In another case series of 22 patients with maxillo-facial brown tumors, 91% were women, with a mean age of 51 years. 6 The mechanism underlying these florid skull changes is not clear. Excess PTH results in an increase in osteoclastic resorption with subsequent fibrous replacement and reactive osteoblastic activity. 7
Osteosclerotic changes are an unusual feature of PHPT, and only 3 cases of multiple skull osteosclerotic lesions in PHPT patients have been reported. One was a 26-year-old man with PHPT, while 2 were women with coexisting vitamin D deficiency. 7,8 Diffuse and patchy osteosclerosis has been described in cases of secondary hyperparathyroidism in renal osteodystrophy and vitamin D deficiency. 9,10 This phenomenon has been postulated to reflect a disproportionate increase in the osteoblastic response after prolonged osteoclastic activity. 11 Our patient exhibited vitamin D insufficiency. In a previous cross-sectional study, vitamin D deficiency was not shown to have any impact on bone microarchitecture. 12 Therefore, the coexistence of vitamin D insufficiency (41.1 nmol/L) and prolonged PTH exposure could not fully account for the marked skull and skeletal manifestations. Although the clinical presentation of PHPT has changed over the years due to early detection and treatment, the challenges of localizing and surgically removing elusive parathyroid adenomas have remained, as in our patient. 13 Abnormal parathyroid glands, variable anatomy and ectopic location of adenomas account for most surgical failures. 14 In cases like ours, localization of the parathyroid gland is essential for preoperative planning. 15 Despite the availability of different advanced imaging modalities for targeted parathyroidectomies, there is still no clear consensus on the preferred imaging strategy. 16 Ultrasonography of the parathyroid glands has sufficient sensitivity (76 to 82%) to detect a single parathyroid enlargement in the neck, but has limited use in multiple gland hyperplasias, double adenomas, the presence of concomitant thyroid nodules and ectopic gland locations. 16 Therefore, ultrasonography alone was not sufficient in our patient, given the presence of thyroid nodules, abnormal parathyroid glands, and the possible ectopic or unusual location. Selective venous sampling has been available since the 1980s. Since our patient initially underwent an unsuccessful parathyroidectomy, angiography of the neck vasculature and selective venous sampling may have been beneficial. Unfortunately, even together with the dual-phase sestamibi scan and sestamibi scan with SPECT imaging, the location of the parathyroid glands remained undetermined. Unusual anatomical location, multiple glands, and coexistent thyroid nodules may reduce the sensitivities of all these imaging modalities. Inevitably, in such cases, medical treatment plays an essential role in lowering the serum calcium level and in bone protection. 17 Unfortunately, most pharmaceutical options have only become available in the last decade, with limited long-term outcome data. Sodium dihydrogen phosphate was the first oral therapy available before cinacalcet. 18 However, large and frequent dosing affected compliance, explaining the fluctuation of serum calcium levels in our patient. During the period when the patient was lost to follow-up, her calcium control was suboptimal. Cinacalcet has only become available recently. It appears to stabilize and maintain normocalcemia over time, but has no effect on the bone. 19 Moreover, adverse gastrointestinal effects may affect the optimization of the cinacalcet dose. In addition, a low vitamin D status has been shown to be associated with specific features reflecting more severe biochemical hypercalcemia in postmenopausal women. 12,20
Supplementing with high-dose vitamin D has been shown to be safe in PHPT cases in a randomized controlled trial (RCT), with improvement in vitamin D status and a decrease in PTH levels without an increase in serum calcium levels. 21 While there have been no RCTs evaluating the combination of cinacalcet and vitamin D supplementation, our patient benefited from this combination, with an optimal reduction in her serum calcium level and without adverse effects.

CONCLUSION Severe calvarial osteosclerosis compressing the posterior fossa and causing communicating hydrocephalus due to persistent PHPT is rare. To the best of our knowledge, such florid diffuse bony changes in the posterior fossa causing brain tissue compression and communicating hydrocephalus have not been previously reported in the literature. An awareness of this rare skull manifestation causing posterior brain compression and communicating hydrocephalus is essential for early diagnosis and prompt intervention to prevent fatal posterior brain herniation. The available medical options offer both advantages and drawbacks. Moreover, the considerable variation in protracted PHPT presentations necessitates individualized management. Continual active surveillance for unusual complications is essential for early detection and prompt treatment.

Ethical Consideration Patient consent was obtained before submission of the manuscript.
Adrenal crisis secondary to bilateral adrenal haemorrhage after hemicolectomy

Summary Adrenal haemorrhage is a rare cause of adrenal crisis, which requires rapid diagnosis, prompt initiation of parenteral hydrocortisone and haemodynamic monitoring to avoid hypotensive crises. We herein describe a case of bilateral adrenal haemorrhage after hemicolectomy in a 93-year-old female with high-grade colonic adenocarcinoma. This patient's post-operative recovery was complicated by an acute hypotensive episode, hypoglycaemia and syncope, and a subsequent computed tomography (CT) scan of the abdomen revealed bilateral adrenal haemorrhage. Given her labile blood pressure, intravenous hydrocortisone was commenced with rapid improvement of blood pressure, which had incompletely responded to fluids. A provisional diagnosis of hypocortisolism was made. The initial heparin-induced thrombocytopenic screen (HITTS) was positive, but platelet count and coagulation profile were both normal. The patient suffered a concurrent transient ischaemic attack with no neurological deficits. She was discharged on a reducing dose of oral steroids, with normal serum cortisol levels at the time of discharge. She and her family were educated about lifelong steroids and the use of parenteral steroids should a hypoadrenal crisis eventuate.

Learning points:
• Adrenal haemorrhage is a rare cause of hypoadrenalism, and thus requires prompt diagnosis and management to prevent death from primary adrenocortical insufficiency.
• Mechanisms of adrenal haemorrhage include reduced adrenal vascular bed capillary resistance, adrenal vein thrombosis, catecholamine-related increased adrenal blood flow and adrenal vein spasm.
• Standard diagnostic assessment is a non-contrast CT abdomen.
• Intravenous hydrocortisone and intravenous substitution of fluids are the initial management.
• A formal diagnosis of primary adrenal insufficiency should never delay treatment, but should be made afterwards.
Background Bilateral adrenal haemorrhage is a particularly uncommon occurrence, especially in a patient with no previous adrenal pathology. It is most commonly seen in the first fortnight postoperatively (1,2). Rapid diagnosis should be made with a non-contrast CT scan of the abdomen. Delay in diagnosis can lead to adrenal crisis. The pathogenesis of adrenal haemorrhage is not yet fully elucidated but may include reduced adrenal vascular bed capillary resistance, adrenal vein thrombosis, catecholamine-related increased adrenal blood flow and adrenal vein spasm.

Case presentation A 93-year-old female presented with two weeks of left upper quadrant abdominal pain associated with 10 kg of unintentional weight loss. A pre-operative computed tomography (CT) scan demonstrated a splenic flexure mass with bowel obstruction. Her medical history included dementia, bladder transitional cell carcinoma without known metastases, laparoscopic cholecystectomy and bilateral knee replacement complicated by a provoked deep vein thrombosis. There was no other personal or family history of pro-thrombotic disorders. On examination, the patient was tender on deep palpation in the left upper quadrant, but otherwise well and haemodynamically stable. A midline laparotomy demonstrated a large obstructing mass in the mid-transverse colon. The colon was mobilised from the sigmoid colon to the terminal ileum. The ileocolic and middle colic vessels were ligated with preservation of the left colic artery. The colon was transected at the distal descending colon and terminal ileum, and a side-to-side stapled anastomosis was performed. The patient was managed in the intensive care unit for 24 h. Histopathology confirmed a T3 high-grade adenocarcinoma measuring 14 × 6.5 cm, with 1 of 15 lymph nodes involved by carcinoma. Given her comorbidities and age, further treatment with chemotherapy was not pursued. Her post-operative recovery was complicated on the fourth day by increasing abdominal pain and an acute hypotensive episode (systolic blood pressure 60 mmHg), hypoglycaemia of 3.2 mmol/L and syncope, which initially responded to fluid resuscitation but recurred two hours later. Haemoglobin dropped from 96 g/L to 89 g/L. An urgent CT scan to exclude post-operative haemorrhage was performed, which demonstrated acute bilateral adrenal haemorrhage, with the right adrenal gland measuring 42 × 33 mm and the left adrenal gland measuring 36 × 30 mm (Figs 1 and 2). Review of previous imaging of the adrenal glands demonstrated no underlying pathology (size 28 × 6 mm on the right and 25 × 7 mm on the left; Fig. 3). There was no evidence of haemorrhage at the operative site.

Investigation A serum cortisol level, added on to blood samples drawn at 0600 h (6 h before the syncopal episode at 1200 h), was 273 nmol/L (normal range: 100-535), which is inappropriately low for a severely ill female after abdominal surgery.
A provisional diagnosis of hypocortisolism was made on the basis of the clinical presentation, bilateral adrenal haemorrhage, hyponatraemia, inappropriately low cortisol before the acute episode and the response of ACTH after glucocorticoid replacement. Unfortunately, at the time, a short Synacthen test could not be performed due to her clinical instability and comorbidities. The hyperpigmentation characteristic of primary adrenal insufficiency (2) was not seen, owing to the acute onset. To investigate the cause of the haemorrhage, a heparin-induced thrombotic thrombocytopenic screen (HITTS) was organised, as she had been on prophylactic subcutaneous heparin perioperatively. The heparin-platelet factor 4 antibody assay was positive, but her platelet count and coagulation profile were normal, with pre-operative platelets 365, post-operative platelets 331 and post-haemorrhage platelets 280 (normal range: 150-400 × 10^3 platelets/microlitre). There was no evidence of venous thrombosis at the site of the adrenal haemorrhage. A second, confirmatory functional test using heparin-induced platelet aggregation was negative. A prothrombotic screen was not performed due to interference from the haemorrhage and the post-operative state. Seven days post-operatively, the patient suffered a transient ischaemic attack (TIA) resulting in 24 h of right upper limb paralysis with full recovery. A progress CT scan six days after the haemorrhage demonstrated no increase in the size of the adrenal haemorrhage.

Treatment Given her labile blood pressure, intravenous hydrocortisone 100 mg daily was commenced, as initial intravenous fluid resuscitation was unable to maintain her blood pressure. Aspirin was recommenced ten days after the haemorrhage because of the TIA, and the patient continued on a reducing dose of oral steroids, discharging on daily cortisone acetate 37.5 mg mane and 25 mg at 1600 h, with fludrocortisone 0.1 mg daily for mineralocorticoid replacement. The patient had several further episodes of hypotension during her in-hospital admission when her hydrocortisone was reduced or withheld to perform repeat serum cortisol levels. She was given a presumptive diagnosis of hypocortisolism, and she and her family were educated on the importance of lifelong steroids, including the provision of an emergency steroid card, management of sick days and the use of parenteral steroids should she have a hypoadrenal crisis.

Outcome and follow-up On discharge, the patient was haemodynamically stable. She was moved into a dementia-specific nursing home, with follow-up booked in the outpatient endocrinology clinic. She continues on replacement doses of oral hydrocortisone and is followed up by her general practitioner, with a latest fasting morning cortisol of 156 nmol/L and fasting ACTH of 22.6 ng/L.

Discussion Bilateral adrenal haemorrhage is an uncommon cause of hypocortisolism. Adrenal haemorrhage is most commonly seen in surgical patients in the first fourteen post-operative days (1,2). Predisposing factors include sepsis, heparin-induced thrombocytopenia, myocardial infarction, congestive heart failure and anti-phospholipid syndrome (2,3). However, it is more commonly associated with trauma, meningococcemia (Waterhouse-Friderichsen syndrome) and anticoagulation use (2).
The underlying mechanism is not yet fully elucidated but may include ageing-related reduced capillary resistance in the adrenal vascular bed (3); adrenal vein thrombosis in hypercoagulable states (1); and stress-induced catecholamine increase leading to increased adrenal blood flow, adrenal vein spasm and platelet aggregation, causing reperfusion and subsequent bleeding (4).

Figure caption: Axial computed tomography pre-operatively, with arrows indicating the adrenal glands.

The presentation is often non-specific, as a result of hypocortisolism and haemorrhage, and includes abdominal pain, nausea, vomiting, confusion and hypotension (5,9). The hyperpigmentation characteristic of primary adrenal insufficiency is initially not present when the insufficiency is acute. Consequently, bilateral adrenal haemorrhage causing acute primary adrenal insufficiency can be difficult to diagnose. Standard diagnostic assessment is a non-contrast CT, which would demonstrate hyperdense adrenal enlargement (6). Biochemical confirmation of hypoadrenalism includes low cortisol and elevated ACTH, with hyponatraemia and hyperkalaemia also present (7). In such situations, hyponatraemia should prompt physicians to consider adrenal insufficiency and surgeons to ensure adequate pre-operative sodium levels. In primary adrenal insufficiency, all layers of the adrenal cortex are affected, resulting in decreased production of glucocorticoids, mineralocorticoids and adrenally derived androgens (8). Adrenal insufficiency due to haemorrhage is initially managed with intravenous hydrocortisone and fluids (8). In the long term, patients require lifelong steroid replacement and sick-day management. This case demonstrated that adrenal crisis after adrenal haemorrhage requires rapid diagnosis, prompt initiation of parenteral hydrocortisone and haemodynamic monitoring to avoid hypotensive crises.
High-Carbohydrate Diets and Food Patterns and Their Associations with Metabolic Disease in the Korean Population Purpose Although an Asian diet is typically high in carbohydrate and low in fat, there has been a steady increase in the rate of cardiometabolic disease in Asian countries over the past decade. We evaluated food patterns of a high-carbohydrate diet and examined their associations with metabolic disease. Materials and Methods Using data from the 2013-2015 Korean National Health and Nutrition Examination Survey, we included a total of 13106 subjects aged 20 years or older in this study. Diet was divided into seven groups according to the percentage of energy from carbohydrates. Food patterns were evaluated as individual servings per food group. Multivariate logistic regression was conducted to estimate odds ratios (OR) for metabolic disease. Results The proportions of men and women exceeding the recommended range of carbohydrate intake were 58.0% and 60.0%, respectively. A higher-carbohydrate diet was associated with lower intakes of energy and saturated fat, with more grains and fruit, but less meat, fish, egg, bean (MFEB), and dairy consumption. Carbohydrate intake decreased by 3.0-3.4% per serving of MFEB and milk. In men, the highest carbohydrate group showed an OR of 1.35 [95% confidence interval (CI), 0.91 to 1.99] for metabolic syndrome, although this failed to reach statistical significance. In women, the highest carbohydrate group had an OR of 1.38 (95% CI, 1.06 to 1.80) for a reduced level of high-density lipoprotein cholesterol. Conclusion This study suggests that a very-high-carbohydrate diet in the Korean population is attributable to lower consumption of MFEB and dairy products and is associated with several metabolic risk factors. The appropriate distribution of macronutrients for the prevention and management of metabolic disease should be explored.

INTRODUCTION The role of dietary carbohydrate in metabolic disease has been reevaluated recently. According to Accurso, et al., 1 carbohydrate restriction improves glycemic control and reduces insulin fluctuations, which improves all of the symptoms of metabolic syndrome. A national U.S. survey reported that high carbohydrate intake was associated with an increased risk of metabolic syndrome. 2 Indeed, high carbohydrate intake is associated with increased risks of metabolic syndrome in Korean adults 3 and of type 2 diabetes and coronary heart disease in Chinese adults. 4,5 As dietary carbohydrate contributes more than 50% of daily energy intake, several aspects of carbohydrate nutrition, such as quality and food source, influence the risk of metabolic disease. [10][11] Dietary glycemic index, an indicator of carbohydrate quality, has also been explored in relation to metabolic disease. A meta-analysis of cohort studies reported that a higher dietary glycemic index or load was associated with increased risks of type 2 diabetes 12 or other diseases. 13 The beneficial effects of low-glycemic-index diets are comparable to those of high intake of whole grains and fiber. 14 East Asian diets are typically rice-based and comprise abundant plant-based foods, particularly white rice; consumption of whole grains is relatively low, and the dietary glycemic load is relatively high. Among Americans, the recommended proportion of carbohydrates in the diet is 45-65%; the average carbohydrate intake is around 49.5% for adults aged 20-74 years, according to the National Health and Nutrition Examination Survey (NHANES) 2009-2010.
15 By contrast, among Koreans the recommended proportion of carbohydrate in the diet is 55-65%; the average intake is 64.1% for men and 66.8% for women. 16 Therefore, more than half of the Korean population obtains more than 65% of its energy requirement from carbohydrates. Although Asian diets have considerably higher carbohydrate levels than Western diets, the term high-carbohydrate diet tends to be used without definition. As carbohydrates are the foundation of the Korean daily diet, high-carbohydrate diets should be characterized in terms of component food groups and associations with metabolic disease, which we aimed to do in the present study.

Data source and participants This study was based on data from the sixth (2013-2015) Korea National Health and Nutrition Examination Survey (KNHANES). The KNHANES is a cross-sectional nationwide survey conducted by the Korea Centers for Disease Control and Prevention that uses a stratified, multistage probability sampling method. This survey comprised three parts: a health-related questionnaire, clinical examinations, and nutrition surveys. A detailed description is provided elsewhere. 17 Of the 22948 subjects who participated in the sixth KNHANES, we excluded those who were under 20 years of age (n=5168), and then sequentially excluded those who had incomplete 24-h recall data (n=1920), had incomplete anthropometric data (n=942), reported extreme energy intake (<500 kcal/d or >5000 kcal/d; n=264), or were pregnant or breastfeeding (n=1548). A total of 13106 subjects were included in the data analyses. This study was approved by the Korea Centers for Disease Control and Prevention Institutional Review Board, and written informed consent was obtained from all participants.

Dietary intake Dietary intake for each subject was assessed by 1-day 24-h recall. These data were collected by trained interviewers during the nutrition survey, covering meals on weekends and weekdays. Energy intake was evaluated as total energy intake (kcal) and as a percentage of the estimated energy requirement (EER). Intakes of macronutrients and fatty acids were assessed as proportions of energy intake. Subjects were divided into seven groups (in increments of 5%) from <55% to >80% of energy from carbohydrate. Using 18 common food groups in the Korean nutrient database, we classified food items into the following five food groups based on the Korean Food Guidance System: 18 grains (300 kcal/serving for staples or 100 kcal/serving for side dishes); meat, fish, eggs, and beans (MFEB; 100 kcal/serving); vegetables (15 kcal/serving); fruit (50 kcal/serving); and milk and dairy products (125 kcal/serving). The Korean Food Guidance System recommends specific numbers of servings for the five major food groups according to sex and age. We calculated energy intakes for the five food groups for each subject and converted them into numbers of servings by dividing by the calorie equivalent of one serving. For grains, 100 kcal was used for one serving to enable comparison with other food guidance systems. Compliance with the recommended number of servings was evaluated as the number of servings consumed divided by the recommended number of servings, multiplied by 100.
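A small sketch of these serving, compliance and carbohydrate-band calculations is given below. The kcal-per-serving values follow the text; the recommended serving counts and the example intake are placeholder assumptions (the actual recommendations vary by sex and age), and the handling of band boundaries is likewise assumed.

```python
KCAL_PER_SERVING = {"grains": 100, "mfeb": 100, "vegetables": 15, "fruit": 50, "dairy": 125}

def servings(energy_kcal_by_group):
    """Convert energy consumed from each food group (kcal) into numbers of servings."""
    return {g: kcal / KCAL_PER_SERVING[g] for g, kcal in energy_kcal_by_group.items()}

def compliance(consumed, recommended):
    """Servings consumed as a percentage of the recommended number of servings."""
    return {g: 100.0 * consumed[g] / recommended[g] for g in consumed}

def carb_group(pct_energy_from_carb):
    """Assign a subject to one of the seven 5%-wide carbohydrate bands (<55% to >80%)."""
    if pct_energy_from_carb < 55:
        return "<55%"
    if pct_energy_from_carb >= 80:
        return ">80%"
    lo = 55 + 5 * int((pct_energy_from_carb - 55) // 5)
    return f"{lo}-{lo + 5}%"

# Placeholder example: one subject's food-group energy (kcal) and assumed recommendations.
intake_kcal = {"grains": 900, "mfeb": 250, "vegetables": 90, "fruit": 100, "dairy": 60}
recommended_servings = {"grains": 9, "mfeb": 4.5, "vegetables": 7, "fruit": 2, "dairy": 1}

consumed = servings(intake_kcal)
print(consumed)                                      # servings per food group
print(compliance(consumed, recommended_servings))    # % of recommendation
print(carb_group(72.0))                              # -> "70-75%"
```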
Sociodemographic, health behavior, anthropometric, and biochemical variables Sociodemographic characteristics and health behaviors were evaluated using a structured questionnaire. Education level was categorized as elementary or less, junior high school, high school, and college or more. Household income was classified into quartiles based on monthly family income, and residence was divided into urban or rural area. Alcohol drinking was evaluated by inquiring how frequently the subject had drunk in the past 12 months and how many drinks had been consumed. Any participant who responded that he or she had consumed more than one drink per month at least once during the past year was defined as a current drinker. Smoking status was determined by inquiring whether the participant had smoked more than 100 cigarettes in his or her lifetime and currently smoked daily or occasionally. Respondents who reported "yes" were categorized as current smokers. Physical activity was defined as walking at least 30 min a day on more than 5 days per week. During the KNHANES clinical examination, weight, height, waist circumference, and blood pressure were measured in a standardized fashion by trained technicians. Blood pressure was measured three times, and the mean of the latter two values was used. Body mass index (BMI) was calculated as weight in kilograms divided by height in meters squared (kg/m2). Blood samples were collected from participants who had fasted for at least 8 hours to determine triglyceride (TG), high-density lipoprotein (HDL) cholesterol, glucose, and total cholesterol levels. Lipid parameters were evaluated using a Hitachi Automatic Analyzer (Hitachi, Japan). The blood collection

Definition of metabolic disease The metabolic diseases evaluated in this study included obesity, metabolic syndrome and its components, type 2 diabetes, hypercholesterolemia, hypertriglyceridemia, and atherogenic dyslipidemia. Obesity was defined as a BMI ≥25 kg/m2 according to the World Health Organization (WHO) Western Pacific region classification. Metabolic syndrome was defined as having three or more of the following abnormalities based on the National Cholesterol Education Program Adult Treatment Panel III (NCEP-ATP III) 20 with a modified waist circumference cutoff for Korean adults: 21 1) abdominal adiposity (waist circumference ≥90 cm for men, ≥85 cm for women), 2) elevated TG level (≥150 mg/dL) or current use of anti-dyslipidemia medication, 3) reduced HDL cholesterol (<40 mg/dL in men, <50 mg/dL in women), 4) elevated blood pressure (systolic blood pressure ≥130 mm Hg or diastolic blood pressure ≥85 mm Hg) or current use of antihypertensive medication, or 5) elevated fasting glucose (≥100 mg/dL) or current use of anti-diabetic medication. Type 2 diabetes was defined as a fasting glucose level ≥126 mg/dL, a history of physician diagnosis, or use of oral hypoglycemic agents or insulin. Hypercholesterolemia was diagnosed as a cholesterol level ≥240 mg/dL or use of cholesterol-lowering medication, and hypertriglyceridemia was defined as a TG level ≥200 mg/dL. Atherogenic dyslipidemia was defined as a reduced HDL cholesterol level (<40 mg/dL for men, <50 mg/dL for women) and an elevated TG level (≥150 mg/dL) or current use of anti-dyslipidemia medication. 22
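Expressed as a small classifier, the NCEP-ATP III definition with the Korean waist-circumference cutoffs used above looks as follows. The subject-record field names are illustrative, and the medication flags stand in for the "or current use of medication" clauses; this is a sketch, not the study's analysis code.

```python
def metabolic_syndrome(s):
    """True if a subject meets >= 3 of the 5 NCEP-ATP III criteria
    (waist-circumference cutoffs modified for Korean adults)."""
    abdominal = s["waist_cm"] >= (90 if s["sex"] == "M" else 85)
    high_tg   = s["tg_mg_dl"] >= 150 or s["on_dyslipidemia_med"]
    low_hdl   = s["hdl_mg_dl"] < (40 if s["sex"] == "M" else 50)
    high_bp   = s["sbp_mmhg"] >= 130 or s["dbp_mmhg"] >= 85 or s["on_bp_med"]
    high_glu  = s["glucose_mg_dl"] >= 100 or s["on_diabetes_med"]
    return sum([abdominal, high_tg, low_hdl, high_bp, high_glu]) >= 3

# Illustrative subject record (not from the survey data).
subject = {
    "sex": "F", "waist_cm": 88, "tg_mg_dl": 180, "hdl_mg_dl": 45,
    "sbp_mmhg": 128, "dbp_mmhg": 86, "glucose_mg_dl": 96,
    "on_dyslipidemia_med": False, "on_bp_med": False, "on_diabetes_med": False,
}
print(metabolic_syndrome(subject))   # True: abdominal adiposity, elevated TG, low HDL, elevated BP
```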
Statistical analysis

All statistical analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA). To account for the complex sampling design, we applied appropriate sampling weights to all analyses using survey procedures in SAS. General characteristics, such as age, education, household income, residence, drinking, smoking, and walking, are expressed as frequencies and percentages, and the significance of differences between the sexes was evaluated by the chi-square test. Energy and macronutrient intakes are expressed as means with their standard errors, and the significance of differences between the sexes was evaluated by t test. Dietary variables, such as percentages of energy from macronutrients and fatty acids and consumption of food groups (number of servings and compliance with the recommended number of servings), are expressed as means and standard errors.

To investigate the associations between intake of macronutrients and consumption of the five food groups, we constructed multiple linear regression models. Beta (the standardized regression coefficient) and R2 (the coefficient of multiple determination) were used as the statistical parameters. Multivariate logistic regression models were used to evaluate the overall trends in the odds ratios (ORs) and 95% confidence intervals (95% CIs) for metabolic diseases according to dietary carbohydrate intake after adjustment for covariates. The covariates were age, education, household income, residence, smoking, drinking, walking, BMI (except for the models with obesity and increased waist circumference), and energy intake; covariates were controlled for in all models. Tests for linear trends in metabolic disease by carbohydrate group were conducted using the median value of carbohydrate intake in each group as a continuous variable. A p value <0.05 was considered to indicate statistical significance.

Characteristics of the study population

The sociodemographic characteristics, health behaviors, and macronutrient intakes of the participants are summarized in Table 1. Both men and women were more likely to be 30−49 years old and live in an urban area. Men were more likely than women to have a higher education level and to walk. Women were less likely than men to be current drinkers and smokers. Men also had higher intakes of energy and macronutrients than women, with the exception of carbohydrates.

Distribution of dietary carbohydrate intake by sex

The distribution of subjects according to dietary carbohydrate intake by sex is presented in Fig. 1. Of the groups categorized according to the proportion of energy from carbohydrate intake, the 70−75% group was the most frequent among both men and women. The proportion of energy from carbohydrates was within the recommended range for 26.0% of men and 25.2% of women, whereas 58.0% of men and 60.0% of women exceeded the recommended proportion of energy from carbohydrates (65%). As age group is an important factor, the distribution of dietary carbohydrate intake according to age group by sex is presented in Supplementary Fig. 1 (only online). The proportion of subjects who consumed <55% of energy from carbohydrate was highest in the 20−29-year group, while the proportion who consumed >80% was highest in the 75 years or more group, for both men and women.

Energy, macronutrient, and fatty acid intake by carbohydrate intake

Energy and macronutrient intakes are presented in Table 2.
Energy intake decreased across the carbohydrate groups. When we evaluated energy intake using the age- and sex-specific EER, the lowest carbohydrate group (<55% group) showed 118.5% in men and 107.4% in women, whereas the highest carbohydrate group (>80% group) showed 82.7% in men and 89.4% in women. In the case of the 70−75% carbohydrate group, which was the most prevalent in this population, the %EER was 95.0% in men and 89.9% in women, and the percentage of energy from fat was 14.3% in men and 14.7% in women. Fat intake decreased across the carbohydrate groups in the same pattern as energy intake. Whereas saturated fat intake was 9.7% in men and 9.8% in women in the lowest carbohydrate group (<55% group), it was 1.8% in men and 1.6% in women in the highest carbohydrate group (>80% group).

Food group consumption according to carbohydrate intake

Food group consumption as a percentage of the recommended servings is presented in Fig. 2. For men, food group consumption ranged from 79% to 117% of the recommendation for grains and from 77% to 102% for vegetables. However, consumption of MFEB was 170% of the recommendation in the lowest carbohydrate group (<55%), compared to 21% in the highest carbohydrate group (>80%). Milk consumption was low in all groups: 51% in the lowest group (<55%) and only 13% in the highest group (>80%). Women showed a similar pattern, with the exception of fruit consumption. Fruit consumption was 74% of the recommended servings in the lowest carbohydrate group (<55%), but 164% in the highest carbohydrate group (>80%).

Relation of food group consumption to carbohydrate intake

The results of multiple linear regression analyses with carbohydrate intake as the dependent variable and consumption of the five food groups as the independent variables are shown in Table 3. The estimated regression coefficients indicate the influence on carbohydrate intake of a one-serving increase in food group consumption. The decrease in carbohydrate intake for an increase of one serving of MFEB was 2.5- and 2.1-fold greater than that for a decrease of one serving of grain intake, in men and women, respectively.

Associations between dietary carbohydrate intake and metabolic disease

The associations between dietary carbohydrate intake and metabolic diseases are shown in Table 4. Men in the highest carbohydrate group showed an OR of 1.35 (95% CI, 0.91 to 1.99) for metabolic syndrome, which failed to reach statistical significance. Of the components of metabolic syndrome, men in the highest carbohydrate group (>80%) showed an OR of 1.41 (95% CI, 1.03 to 1.92) for an elevated TG level, compared to those in the lowest carbohydrate group (<55%).

DISCUSSION

Our findings demonstrated that Korean adults consume a very-high-carbohydrate diet and that a higher carbohydrate intake is associated with lower intakes of energy and saturated fat and low consumption of MFEB and milk. Moreover, very-high-carbohydrate diets comprising more than 80% of energy intake were significantly associated with an increased risk of an elevated TG level in men and an increased risk of a reduced HDL cholesterol level in women.

A typical Korean diet is based on white rice with plenty of plant-based foods (i.e., a high-carbohydrate and low-fat diet). As white rice is a core food, its contribution to total grains is considerable, despite an overall decrease in the consumption of white rice in Korea over the past decade. In this study, in the highest carbohydrate group (>80%), white rice represented about 65% of grain consumption among both men and women (Supplementary Fig. 2, only online). This high dependence on white rice results in a very-low-fat diet with low consumption of whole grains. Approximately 60% of subjects in a Korean national survey had no whole grain consumption on the survey day, and the proportion of whole grains among total grains was as low as 5%.
Furthermore, a systematic review showed that a low-fat diet, defined as less than 30% of energy from fat, had adverse effects on TG levels in the long term, despite a favorable impact on total and LDL cholesterol levels. 28 This finding is consistent with studies in Asian populations, including this work. A study in a Chinese population reported that high carbohydrate intake from starchy foods was positively associated with high TG and low HDL cholesterol levels. 29 A study in a Japanese population reported that a high glycemic load was positively correlated with elevated fasting TG and reduced HDL cholesterol levels. 30 A recent study of Koreans reported that adults who consumed a very-low-fat diet, defined as less than 15% of energy intake, had an increased risk of metabolic syndrome. 31 The nutrient adequacy of a very-low-fat diet may vary depending on the food intake pattern; intake of adequate protein and complex carbohydrate is required. 24

MFEB are the primary protein-containing foods, and a recent meta-analysis showed that soy protein supplementation reduced cardiometabolic markers. 32 Although meat consumption has increased in Asian countries in the past decade, it is still substantially lower in Asian countries than in the United States. 33 Red meat consumption is linked to an increased risk of mortality in Western populations. 34,35 However, a pooled analysis of prospective cohort studies in Asian populations reported that red meat and poultry consumption were inversely associated with mortality in both men and women. 33 Japanese cohort studies reported that the intake of animal products had a protective effect on intracerebral hemorrhage 36 and cerebral infarction, 37 and an observational study in Japan found no association between an animal-food dietary pattern and abnormal glucose tolerance. 38 In a Chinese cohort, poultry intake was inversely associated with cardiovascular disease mortality in men, 39 and total meat and fish intake was not associated with a risk of colorectal cancer. 40 A semi-Western diet, which is characterized by relatively high intakes of meat, poultry, eggs, and alcohol, was associated with a low risk of reduced HDL cholesterol in KNHANES 2007−2008. 41 This phenomenon is likely due to the inadequate meat intake typical of Asian populations. The effect on reducing carbohydrate intake of increasing MFEB consumption by one serving was more than two-fold that of decreasing grain intake by one serving. Therefore, MFEB consumption should be the focus of efforts to reduce dietary carbohydrate levels.

In this study, we found a positive association of dietary carbohydrate with metabolic disease. As our data were based on the prevalence of metabolic disease, further studies are necessary to confirm our findings in a longitudinal design. A recent study of Korean adults reported that a carbohydrate composition in the range of 67−70% was associated with a significantly reduced OR for metabolic syndrome incidence. 42
However, that study did not stratify the association by sex, had a short duration of 2 years, and used a different dietary assessment, which makes it difficult to compare with our results. Further studies are still necessary in Koreans to elucidate the role of dietary carbohydrate in the progression of metabolic disease.

Our study has several limitations. The data were from a cross-sectional study, which precludes identification of causal relationships between carbohydrate intake groups and metabolic diseases. In addition, a single 24-h recall is not representative of typical food intake. Furthermore, we calculated the servings of each food group based on a food guidance system because there was no serving database for individual foods, which may have resulted in error. Also, we included subjects who had diagnosed or treated metabolic disease, which might affect the findings in this study. However, when we examined the relationship of carbohydrate intake with food groups or metabolic risk factors excluding those subjects, similar trends with the same significance were observed, although attenuated. Lastly, we cannot say that the model predicting dietary carbohydrate intake from consumption of the five major food groups fit very well. However, this is the first study to provide evidence on how to reduce dietary carbohydrate intake in practical ways, in relation to food groups and portions thereof. Further studies to confirm our results would be necessary. Despite these limitations, this study contributes to a quantitative understanding of the food patterns of high-carbohydrate diets.

In conclusion, the very-high-carbohydrate diet typical of the Korean population is attributable to lower consumption of MFEB and dairy products and is associated with several metabolic risk factors. The optimum macronutrient intakes and appropriate food patterns for each country should be explored to enhance the prevention and management of metabolic diseases.

Fig. 2. Food group consumption (percentage of recommended servings) according to dietary carbohydrate intake based on the Korean Food Guidance System. % servings = the number of servings consumed/the recommended number of servings × 100. MFEB, meat, fish, eggs, and beans.

Table 1. Characteristics of Participants according to Sex. All analyses accounted for the complex sampling design effect and appropriate sampling weights of the national survey using PROC SURVEY in SAS. *p values were determined by t test for continuous variables and the chi-square test for categorical variables.

Table 2. Energy, Macronutrient, and Fatty Acid Intake according to Dietary Carbohydrate Intake by Sex. EER, estimated energy requirement. All analyses accounted for the complex sampling design effect and appropriate sampling weights of the national survey using PROC SURVEY in SAS. Values are means (standard errors). *Energy intake was evaluated as the percentage of the age- and sex-specific EER.

Table 3. Multiple Linear Regression Results for Carbohydrate Intake with Consumption of the Five Food Groups according to Sex. Grains, one serving = 100 kcal; MFEB, one serving = 100 kcal; vegetables, one serving = 15 kcal; fruit, one serving = 50 kcal; milk and dairy products, one serving = 125 kcal. All analyses accounted for the complex sampling design effect and appropriate sampling weights of the national survey using PROC SURVEY in SAS. A linear regression model was used to predict dietary carbohydrate intake from consumption of the five major food groups after adjustment for age, education, household income, and energy intake. β, standardized regression coefficient; adj R2, adjusted coefficient of multiple determination.
Table 4. Multivariable-Adjusted ORs and 95% CIs for Metabolic Diseases according to Dietary Carbohydrate Intake by Sex. CI, confidence interval; BMI, body mass index; SBP, systolic blood pressure; DBP, diastolic blood pressure; HDL, high-density lipoprotein; TG, triglyceride. All analyses accounted for the complex sampling design effect and appropriate sampling weights of the national survey using PROC SURVEY in SAS. Multivariate-adjusted logistic regression was used to estimate ORs (95% CIs) and p values for trends after adjustment for age, education, household income, residence, current smoking, current alcohol drinking, physical activity, BMI (except for the models with obesity and increased waist circumference), and energy intake.
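As a rough illustration of the adjusted logistic regression and linear-trend test summarized above, the sketch below fits a weighted binomial GLM in Python. It only approximates the SAS survey procedures, since it applies the sampling weights as frequency weights and does not reproduce design-based variance estimation; the column names are assumptions for illustration.

# Approximate Python analogue (not the authors' SAS code) of the adjusted
# logistic regression and linear-trend test described for Table 4.

import numpy as np
import statsmodels.api as sm

def fit_adjusted_or(df, outcome, exposure_cols, covariate_cols, weight_col):
    # exposure_cols: indicator columns for the carbohydrate groups (reference omitted)
    X = sm.add_constant(df[exposure_cols + covariate_cols].astype(float))
    res = sm.GLM(df[outcome], X, family=sm.families.Binomial(),
                 freq_weights=df[weight_col]).fit()
    ors = np.exp(res.params[exposure_cols])          # adjusted odds ratios
    ci = np.exp(res.conf_int().loc[exposure_cols])   # 95% confidence intervals
    return ors, ci, res

def p_for_trend(df, outcome, group_col, group_medians, covariate_cols, weight_col):
    # Trend test: replace each carbohydrate group with its median intake and
    # test that single continuous term.
    df = df.assign(carb_median=df[group_col].map(group_medians))
    X = sm.add_constant(df[["carb_median"] + covariate_cols].astype(float))
    res = sm.GLM(df[outcome], X, family=sm.families.Binomial(),
                 freq_weights=df[weight_col]).fit()
    return res.pvalues["carb_median"]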
2018-08-14T20:08:23.012Z
2018-08-07T00:00:00.000
{ "year": 2018, "sha1": "29639dfc491b5328a32a0317b6cf019e9052d393", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3349/ymj.2018.59.7.834", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "29639dfc491b5328a32a0317b6cf019e9052d393", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
255812153
pes2o/s2orc
v3-fos-license
The whole-transcriptome landscape of muscle and adipose tissues reveals the ceRNA regulation network related to intramuscular fat deposition in yak The Intramuscular fat (IMF) content in meat products, which is positively correlated with meat quality, is an important trait considered by consumers. The regulation of IMF deposition is species specific. However, the IMF-deposition-related mRNA and non-coding RNA and their regulatory network in yak (Bos grunniens) remain unknown. High-throughput sequencing technology provides a powerful approach for analyzing the association between transcriptome-related differences and specific traits in animals. Thus, the whole transcriptomes of yak muscle and adipose tissues were screened and analyzed to elucidate the IMF deposition-related genes. The muscle tissues were used for IMF content measurements. Significant differences were observed between the 0.5- and 2.5-year-old yaks. Several mRNAs, miRNAs, lncRNAs and circRNAs were generally expressed in both muscle and adipose tissues. Between the 0.5- and 2.5-year-old yaks, 149 mRNAs, 62 miRNAs, 4 lncRNAs, and 223 circRNAs were differentially expressed in muscle tissue, and 72 mRNAs, 15 miRNAs, 9 lncRNAs, and 211 circRNAs were differentially expressed in adipose tissue. KEGG annotation revelved that these differentially expressed genes were related to pathways that maintain normal biological functions of muscle and adipose tissues. Moreover, 16 mRNAs, 5 miRNAs, 3 lncRNAs, and 5 circRNAs were co-differentially expressed in both types of tissue. We suspected that these co-differentially expressed genes were involved in IMF-deposition in the yak. Additionally, LPL, ACADL, SCD, and FASN, which were previously shown to be associated with the IMF content, were identified in the competing endogenous RNA (ceRNA) regulatory network that was constructed on the basis of the IMF deposition-related genes. Three ceRNA subnetworks also revealed that TCONS-00016416 and its target SIRT1 “talk” to each other through the same miR-381-y and miR-208 response elements, whereas TCONS-00061798 and its target PRKCA, and TCONS-00084092 and its target LPL “talk” to each other through miR-122-x and miR-499-y response elements, respectively. Taken together, our results reveal the potential mRNA and noncoding RNAs involved in IMF deposition in the yak, providing a useful resource for further research on IMF deposition in this animal species. Background The intramuscular fat (IMF) content in livestock is positively correlated with various aspects of meat quality, such as tenderness, flavor, and juiciness, and as such is one of the key traits related to consumer preference. The IMF refers to the sum of phospholipid, triglyceride, and cholesterol contents within muscles, and is considered as the last type of fat developed during fat deposition. Research has revealed that the IMF content is determined both by hypertrophy and hyperplasia of adipocytes during the development of livestock species [1]. The factors that related to the variation of IMF content in livestock include the species, breed, muscle types, gender, age, and nutrition level [2,3]. Mechanisms such as nutrient regulation ultimately affect the deposition of IMF by affecting the transcription, mRNA expression, protein expression, and modification of genes. Studies have found the heritability of the IMF content to range from 0.47 to 0.53 [4][5][6]. 
However, because the IMF content can only be measured after animal slaughter, since there are no instruments that can measure it in vivo, it is difficult to improve this trait by the traditional selection methods. Hence, molecular breeding based on the mechanism of IMF metabolism is a key method used for IMF content improvement [7]. However, no effective marker for IMF content selection practices in the livestock has yet been found. The yak (Bos grunniens), one of the ruminants that live in the Qinghai-Tibet Plateau and adjacent areas, is well adapted to the high-altitude environments. Compared with cattle meat, yak meat has higher contents of protein and mineral substance, but a lower content of fats, especially IMF [8]. A poor IMF deposition ability is a common phenomenon in yaks, and there are no known populations or breeds of yaks with an excellent IMF deposition ability. Therefore, to improve this ability of yaks fundamentally, the key genes affecting the molecular genetic mechanism of IMF deposition in this species need to be found. The IMF content depends mainly on the size and number of intramuscular adipocytes and muscle growth rate [2], indicating that muscle cells and adipocytes interact with each other during IMF deposition. Both adipocytes and myocytes originate from mesenchymal stem cells [9,10]. Moreover, the muscles and adipose tissue are considered as major endocrine organ that secrete numerous proteins, named myokines and adipokines, respectively [11,12]. Myostatin, which is secreted from myocytes, decreases the IMF content by inhibiting the differentiation of preadipocytes [13]. It was reported that the coculture of C2C12 skeletal muscle cells with 3 T3-L1 adipocytes increased the gene expression of peroxisome proliferator-activating receptor gamma (PPARγ), fatty acid synthase (FASN), and fatty acid-binding protein (FABP4) [14], which interestingly are genes that play a key roles in fatty acid metabolism and have also been demonstrated to be related to IMF deposition [15][16][17]. These findings indicate that muscle cells are involved in the regulation of lipid-related factors in adipocytes and may participate in the IMF deposition processes. Many recent studies on the mechanism of IMF deposition in cattle have already revealed some of the genes that are involved in the IMF deposition-regulating pathway [18]. However, the genes associated with IMF deposition in yaks and their related molecular mechanisms remain unknown. The one-by-one identification of the potential regulatory genes in the yak would undoubtedly be like trying to find a needle in a haystack. Moreover, previous studies have showed that the IMF content varies even between breeds of the same species [19] and between different development stages [20]. Previous studies showed that the IMF content of longissimus dorsi (LD) in 0.5-year-old yaks were significantly lower than that in adult yaks [21], but was similar among adult yaks of different ages, which is unlike the situation in cattle where the IMF content of this same muscle increase with advancing age. Taken together, these results indicate that the regulation of MF deposition is species specific. The yak used in this study are part of a dual-purpose (i.e., indigenous meat-dairy) population that is distributed in Changdu city, Tibet province, China. After longterm interbreeding, the yaks have attained consistency in appearance, reproductive and production performances. 
Until now, a global analysis of the molecular mechanism of IMF deposition in yak has not been previously performed. Therefore, the elucidation of the differences in the whole transcriptomes related to IMF deposition at different development stages of the yak is essential for interpreting the function of the DEGs. In this study, the IMF contents in 0.5-, 2.5-, 4.5-, and 7.5-year-old yaks were determined, and the whole-transcriptome profiles of the LD muscle and its adjacent intermuscular adipose tissues (AA) in the 0.5-and 2.5-year-old yaks were obtained to compare the DEGs in these two tissues between the two developmental stages. Then, the co-DEGs were obtained and considered as the DEGs involved in IMF deposition. Using clustering analysis and advanced visualization techniques, several genes and pathways involved in adipogenesis and lipogenesis were revealed. Finally, we constructed a comprehensive competing endogenous RNA (ceRNA) network on the basis of the co-DEGs between the LD and AA tissues to highlight the genes that are most likely to be involved with the IMF trait in yaks. Results Intramuscular fat contents of the longissimus dorsi muscle in yaks of different ages The IMF content of the LD increased along with the development of the yaks from 0.5 to 7.5 years of age. Compared with the IMF content in the 0.5-year-old yaks, that in the 2.5-year-old animals was significantly higher (p < 0.05), and this age group also showed the fastest LD fat deposition of the yaks. However, the IMF content increased slightly from the 2.5-year-old to the 7.5-year-old animals ( Fig. 1a and b), Overview of RNA sequencing To assess the genes involved in IMF deposition, LD and AA tissues were collected from the 0.5-and 2.5-year-old yaks for the whole-transcriptome profiling of all mRNAs and noncoding RNAs (long noncoding RNAs (lncRNAs), circular RNAs (circRNAs), and microRNAs (miRNAs)) via high-throughput sequencing. For the RNAsequencing (RNA-Seq) libraries, an average of 95.62 million clean reads were obtained from the 12 samples tested, and 87.91-90.13% of these reads were uniquely aligned to the reference genome Ensemble BosGru v2.0. All 12 samples had at least 94.80% reads equal to or exceeding Q30 (Table S1). In addition, for the small RNA-Seq libraries, an average of 10.80 million clean reads were obtained. An average of 9.59 million known miRNA reads, 1.57 thousand novel miRNA reads, and 28.21 thousand unannotated reads were obtained after a series of analyses (Table S2). In total, 45,366 mRNAs (Table S3) were obtained, of which 84.88% were generally expressed in both LD and AA tissues. Moreover, 22,596 and 22,770 known and novel mRNAs were identified, respectively, which included 2737 LD tissue-specific and 4122 AA tissuespecific mRNAs (Table S4). Additionally, 4142 lncRNAs were obtained, of which 3600 and 3761 were expressed in the LD and AA tissues, respectively, and 77.72% were consistently expressed in both types of tissues. Of these lncRNAs, 383 were LD tissue specific and 541 were AA tissue specific (Table S4). Furthermore, 1444 miRNAs were identified, of which 1290 were known and 154 were novel (Table S3). Finally, 39,853 circRNAs were identified in the yak (Table S3), of which 17,211 and 9616 were found to be LD tissue specific and AA tissue specific, respectively (Table S4). 
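How such tissue-specific and commonly expressed transcripts can be tallied from an expression matrix is sketched below; treating a transcript as "expressed" when its FPKM exceeds a cutoff in every replicate of a tissue is an assumption for illustration, not necessarily the criterion used in this study.

# Illustrative sketch (assumptions, not the authors' pipeline) of classifying
# transcripts as LD-specific, AA-specific, or shared from an FPKM matrix.

import pandas as pd

def classify_transcripts(fpkm, ld_cols, aa_cols, threshold=0.1):
    """fpkm: DataFrame indexed by transcript ID with one column per sample."""
    in_ld = (fpkm[ld_cols] > threshold).all(axis=1)
    in_aa = (fpkm[aa_cols] > threshold).all(axis=1)
    return {
        "LD_specific": fpkm.index[in_ld & ~in_aa].tolist(),
        "AA_specific": fpkm.index[in_aa & ~in_ld].tolist(),
        "shared": fpkm.index[in_ld & in_aa].tolist(),
    }

# Hypothetical usage with sample columns named LD_1..LD_6 and AA_1..AA_6:
# groups = classify_transcripts(fpkm, [f"LD_{i}" for i in range(1, 7)],
#                               [f"AA_{i}" for i in range(1, 7)])
# print({k: len(v) for k, v in groups.items()})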
Total lncRNAs and differentially expressed lncRNAs during intramuscular fat deposition

To reveal the potential functions of the 4142 identified lncRNAs in IMF deposition, three independent prediction approaches, namely antisense (mRNA sequence complementarity), cis (genomic location), and trans (expression correlation), were applied to predict the target genes of the lncRNAs. In total, 3963 target genes were predicted, of which 332 were targets of 421 antisense lncRNAs, 1089 were targets of 826 cis-acting lncRNAs, and 3214 (1487) showed the most positive (negative) expression correlation with the 4142 trans-acting lncRNAs (Table S6). KEGG analysis revealed that the antisense lncRNAs were significantly enriched for glycolysis and gluconeogenesis pathways (Table S6), and the trans-acting lncRNAs were significantly annotated to pathways of lipid and carbohydrate metabolism, such as the steroid hormone biosynthesis, ascorbate and aldarate metabolism, and starch and sucrose metabolism pathways (Table S6). Additionally, even though they were not significantly enriched in any pathways, the cis-acting lncRNAs were involved in the transforming growth factor-beta (TGF-β) signaling and Hedgehog signaling pathways, which play key roles in lipid metabolism (Table S6).

Four differentially expressed lncRNAs (DELs) (2 upregulated and 2 downregulated) and 9 DELs (8 upregulated and 1 downregulated) were identified in the LD and AA tissues, respectively (Table 2). As a preliminary exploration of the functional implications of the DELs across the genome, we investigated whether lncRNAs were co-regulated with the DEGs during IMF deposition. Interestingly, in both LD and AA tissues, we observed that the antisense lncRNA TCONS_00084092 targeted LPL as its differentially expressed co-target gene, whereas the two trans-acting lncRNAs TCONS_00016416 and TCONS_00061798 targeted SIRT1 and PRKCA, respectively, as their differentially expressed co-target genes (Tables 2 and 3).

Differentially expressed miRNAs and circRNAs during intramuscular fat deposition

In total, 62 differentially expressed miRNAs (DEMs) were obtained in LD tissues, of which 30 were upregulated and 32 were downregulated (Fig. 4a, Table S7). KEGG pathway analysis revealed that these DEMs were significantly enriched in 94 pathways, some of which are important for lipid biosynthesis; for example, the PI3K-Akt signaling, MAPK signaling, AMPK signaling, fatty acid metabolism, and biosynthesis of unsaturated fatty acids pathways. Moreover, 15 DEMs were obtained in the AA tissues, of which 6 were upregulated and 9 were downregulated (Fig. 4b, Table S7). The targets of these 15 DEMs were significantly enriched in 63 pathways, some of which are related to lipid metabolism; for example, the Hippo signaling, MAPK signaling, AMPK signaling, and PI3K-Akt signaling pathways (Table S7). Furthermore, two miRNAs (miR-122-x and miR-381-y) were simultaneously downregulated in both the AA and LD tissues, and one novel miRNA (novel-m0085-5p) was contemporaneously upregulated in both tissues. Two miRNAs (miR-208-y and miR-499-y) exhibited opposite expression trends, being upregulated in LD tissue but downregulated in AA tissue (Table 4).

We also identified 223 differentially expressed circRNAs (DECs; 125 upregulated and 98 downregulated) in the LD tissue (Fig. 4c, Table S8).
KEGG pathway analysis revealed that these DECs were significantly enriched in the cyclic guanosine monophosphate (cGMP)-protein kinase G (PKG) signaling pathway, and involved in pathways related to lipid and carbohydrate metabolism; for example, the propanoate and pyruvate metabolism, fatty acid biosynthesis, Hippo signaling, and MAPK signaling pathways (Table S8). Moreover, 211 DECs (91 upregulated and 120 downregulated) were obtained in the AA tissues (Fig. 4d, Table S8), where the functional annotation results revealed that they were enriched in pathways related to lipid metabolism, such as AMPK signaling, fatty acid biosynthesis, and fatty acid metabolism (Table S8). Of these DECs in the LD and AA tissues, circRNA000230 and circRNA053707 were found to be simultaneously downregulated, whereas circRNA008790 and circRNA040844 were simultaneously upregulated. In addition, circRNA054960 was upregulated in the LD tissue but downregulated in the AA tissue (Table 5, Table S8).

Construction of the ceRNA coregulatory network

It has been shown that mRNAs, lncRNAs, and circRNAs may act as ceRNAs, which regulate gene function via miRNAs in various processes [22,23], suggesting that ceRNAs and their miRNAs may be coregulated in IMF deposition. On the basis of the data on the co-differentially expressed mRNA, lncRNA, circRNA, and miRNA transcripts, we obtained the mRNA-miRNA, lncRNA-miRNA, and circRNA-miRNA pairs, combined them with the lncRNA-mRNA pairs, and then constructed the integrated ceRNA network. The constructed network contained 10 DEGs, 5 DEMs, 5 DECs, 3 DELs, and 29 relationships (Fig. 5). Within the network, it was found that both TCONS-00016416 and its target SIRT1 could be targeted by miR-381-y. The same results were observed for the miR-122-x-TCONS-00061798-PRKCA and miR-499-y-TCONS-00084092-LPL ceRNA subnetworks, suggesting that SIRT1, PRKCA, and LPL may be crucial genes mediated by noncoding RNAs in regulating IMF deposition.

RT-qPCR validation of gene expression

Validation of the RNA-Seq results was carried out using quantitative reverse-transcription polymerase chain reaction (RT-qPCR) for 3 DEGs (LPL, SIRT1, and PRKCA), 3 DEMs (miR-122-x, miR-381-y, and miR-499-y), 2 DELs (TCONS-00016416 and TCONS-00084092), and 2 DECs (Circ_040844 and Circ_053707). The expression of these selected transcripts was significantly different in both the LD and AA tissues during yak development, with the expression patterns being highly consistent with those obtained by the RNA-Seq method (Figs. 6 and 7). It is worth mentioning that the back-splice junctions of the circRNAs were confirmed before the validation; as shown in Fig. 7, owing to their circular structure, the circRNAs were more resistant to digestion by RNase R treatment, and the back-splicing sites were verified by Sanger sequencing (Fig. 7d and e). These results indicated the high reproducibility and reliability of the gene expression profiles obtained in our study.

Discussion

As is widely known, the IMF content is a polygenic trait in animals that is an important determinant of meat quality characteristics. Given the importance of IMF to the economics of livestock production, the clarification of the molecular mechanisms underlying IMF deposition holds great significance. The associations of genomic markers with IMF deposition are not always consistent, depending on the species or breed. To the best of our knowledge, this is the first study that has comprehensively analyzed the whole-transcriptome profiles related to yak meat characteristics.
In this study, to identify genes that are related to IMF deposition in the yak, we first measured the IMF content in the LD of 0.5-, 2.5-, 4.5-, and 7.5-year-old yaks, whereupon it was found that IMF was deposited quickly from 0.5 to 2.5 years. This prompted us to choose LD and AA tissues at these two developmental stages for the whole-transcriptome profile analysis. In total, 149 and 72 DEGs were identified during yak muscle and adipose tissue development, respectively. Of these, 16 DEGs were co-differentially expressed in both tissues, many of which have known functions in lipid metabolism. For instance, the lipogenesis genes ACACB, FASN, and SCD are involved in the fatty-acyl-CoA biosynthetic process and play catalytic roles in fatty acid biosynthesis [24,25]. LPL is the principal enzyme that acts as a key factor in the hydrolysis of triacylglycerol and the uptake of free fatty acids from the plasma. ACADL, one of the rate-limiting enzymes in fatty acid beta-oxidation, has been identified as a candidate functional gene for IMF deposition in chicken [26]. Moreover, the expression dynamics of AUST2, ACADL, ELOVL7, FASN, ZNF41, and SCD during muscle and adipose development are consistent with the increasing trend of IMF deposition, whereas SIRT1 showed the opposite trend, indicating that the former genes act as positive regulators of IMF deposition whereas the latter acts as a negative regulator of this process. Interestingly, XAB2, RGMB, SMAD1, PRKCA, MAP4K1, LPL, and HIF1A were upregulated during muscle development but downregulated during adipose tissue development, whereas ACACB and GNA12 showed the opposite expression trend. These consistent or inconsistent changes in expression trends may be related to the different functions of these genes during the development of these two tissues, which are worthy of future analysis.

As a major class of noncoding RNAs, lncRNAs can act as key regulators in many biological and pathological processes via trans, cis, and antisense activities. PU.1 expression was found to be modulated by an antisense lncRNA that was transcribed from the PU.1 gene itself in immune-related cell lines [27] and preadipocytes [28]. The lncRNA Jpx acts both in trans and in cis to activate X-inactive specific transcript expression in mouse embryonic cells [29]. Thus, the prediction of lncRNA target genes through these three independent algorithms, together with the KEGG analysis, is useful for identifying the processes in which lncRNAs are involved and for further revealing their potential functions. Our results showed that the lncRNAs obtained in this study may regulate pathways related to lipid or carbohydrate metabolism (e.g., the steroid hormone biosynthesis, Hedgehog signaling, and glycolysis and gluconeogenesis pathways) via trans, cis, and antisense activities. Previous studies have shown that the Hedgehog pathway enables cells to sense and respond to Hedgehog ligands, which are covalently modified by cholesterol, indicating that there is a connection between Hedgehog signaling and lipid metabolism [30]. Furthermore, our results identified 4 DELs and 9 DELs during muscle and adipose development, respectively, and showed that 3 co-DELs may regulate 3 lipid-related genes (LPL, SIRT1, and PRKCA) via antisense and trans activities. As an enzyme, LPL is involved in fatty acid catabolism, and its expression level is positively associated with IMF content [31].
Moreover, several lines of evidence have demonstrated that SIRT1 (a key metabolic/energy sensor) plays an important role in regulating lipid metabolism by deacetylating transcriptional regulators and co-activators, for instance, carbohydrate response element binding protein (ChREBP), sterol regulatory element binding protein-1c (SREBP-1c), PPARα, and nuclear factor-κB (NF-κB), among others [32]. On the basis of these results, we suspect that one of the main roles of these 3 lncRNAs is to regulate IMF deposition in the yak, and further elucidation of their detailed mechanisms in this process would be fertile ground for future investigation.

The results on the DEMs and DECs during muscle and adipose tissue development indicated that they were significantly enriched in lipid-related pathways, for instance, the MAPK signaling [33,34], PI3K-Akt signaling [35], AMPK signaling [36,37], fatty acid biosynthesis, and fatty acid metabolism pathways. AMPK represses fatty acid, triglyceride, and cholesterol synthesis in several ways, including through phosphorylation of acetyl-CoA carboxylase [36], 3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMGCR), and SREBP2, and it represses the proteolytic processing, nuclear translocation, and transcriptional activity of SREBP1 [37]. Interestingly, 5 DEMs and 5 DECs were found to be co-differentially expressed during muscle and adipose tissue development. Although the DECs are novel circRNAs that were not previously identified, and very little information about them is available in published databases, some of the co-differentially expressed noncoding RNAs have been reported to be involved in lipid metabolism; for example, miR-122 was discovered to be involved in the regulation of cholesterol and fatty acid homeostasis in humans and mice [38,39], and miR-499 can negatively regulate PR domain containing 16 expression and hinder the adipogenic differentiation of skeletal muscle satellite cells [40].

Combining the co-differentially expressed DEGs, DELs, DECs, and DEMs, we constructed a ceRNA regulatory network, which showed that 10 DEGs, 5 DECs, and 3 DELs cross-talked with one another through the 5 DEMs. This indicated that IMF deposition in the yak results from a balanced level of gene expression, and that the development of IMF is a complex multi-organ process regulated by the coordinated actions of muscle and adipose tissues. Furthermore, from the ceRNA network, we observed three ceRNA subnetworks, which showed that TCONS-00016416 and its target SIRT1 "talked" to each other through the same miR-381-y and miR-208-y response elements, whereas TCONS-00061798 and its target PRKCA, and TCONS-00084092 and its target LPL, "talked" to each other through miR-122-x and miR-499-y response elements, respectively. Therefore, we speculate that these three subnetworks may play a key role in the regulation of IMF deposition.

Fig. 5. The ceRNA co-regulation network. Co-differentially expressed mRNAs, miRNAs, lncRNAs, and circRNAs during AA and LD tissue development were used to construct the ceRNA co-regulation network. The blue box, red circle, yellow rhombus, and pink ellipse nodes represent co-differentially expressed mRNAs, miRNAs, lncRNAs, and circRNAs, respectively. The dotted lines and solid lines indicate co-regulation between lncRNAs and mRNAs, and between miRNAs and other transcripts, respectively.

Although we successfully found many DEGs that may be associated with IMF content and constructed a ceRNA regulatory network for the yak, some limitations in this study should be noted.
Since the collection of IMF in yak is difficult, we sampled the LD and AA tissues to synthetically analyze the potential IMF-related genes to overcome the limitations of IMF sampling, which may to some extent have affected the results and explanations. Nonetheless, the genes involved in the ceRNA regulatory network may play an important role in IMF deposition through molecular synergism and upregulation of important pathways, these possibilities are worthy of future research efforts. Conclusions The present study provides a comprehensive landscape of the differences in the whole-transcriptome profiles of LD and AA tissues between two developmental stages in yaks. We identified 16 DEGs related to lipid biosynthesis that were co-differentially expressed in the two tissues during development, including ACACB, ACADL, ELOVL7, SIRT1, FASN, LPL, PRKCA, and SCD. Furthermore, we found that several differentially expressed lncRNAs, miRNAs, and circRNAs during muscle and adipose development were closely related to some lipid metabolism pathways, and that the 3 lncRNAs, 5 miRNAs, and 5 circRNAs that were co-differentially expressed in the two tissues may play a crucial role in regulating IMF deposition. On the basis of the codifferently expressed transcripts, we constructed a ceRNA regulatory network which contained 10 DEGs, 5 DEMs, 5 DECs, 3 DELs, and 29 relationships. Within the network, we suspected that the 3 ceRNA subnetworks (i.e., miR-381-y-TCONS-00016416-SIRT1, miR-122-x-TCONS-00061798-PRKCA, and miR-499-y-TCONS-00084092-LPL) may play a crucial role in the regulation of IMF deposition. Our findings have identified potential regulators and molecular regulatory networks that may be involved in IMF contents in yaks, and provide a foundation for future studies on the molecular mechanisms underlying IMF deposition. Animals and sample collection In total, 12 LWQ female yaks that had been raised in grazing systems and under the same conditions of handing and nutrition in natural pasture of Leiwoqi country (Location: Changdu, Tibet, China; geographic coordinates: 96°23′33″E, 31°27′3″N, altitude:4200 m above sea level) were randomly selected at four developmental stages (0.5, 2.5, 4.5, and 7.5 years of age). Each stage comprised three yaks with the similar body weight. Between Oct 21st and 22 nd, 2017, all yaks were stunned with a captive bolt pistol (Cash 8000 Model Stunner, 0.22 calibre, 4.5 grain cartridge) to ameliorate the suffering of the animals prior to their humane killing, following which exsanguination via a transverse incision of the neck was carried out in the slaughterhouse of Zang Jia Mao Niu Co, Ltd. Then, the LD and AA tissues were excised immediately between the 12th and 13th ribs (right half carcass) and rapidly stored in liquid nitrogen until RNA isolation. We also collected one more LD sample for IMF content measurement. All animals used in this study belong to Zang Jia Mao Niu Co, Ltd. Analysis of the intramuscular fat content The IMF content of the 12 LD samples were determined according to the standard Soxhlet extraction method [41,42]. In brief, the LD sample was pre-dried and crushed following weighted an x amount (in grams) into the Soxhlet glass tube, and then transferred to the extraction chamber in the Soxhlet equipment. The sample was soaked overnight in anhydrous ether, following which the anhydrous ether backflow devices were opened for 10 h at 80°C. The residue sample was dried under a fume hood for1 h and then transferred to a forced-air oven at 105°C for 8 h. 
The dried residue sample was weighted and marked as the y amount (in grams). The IMF content was calculate as follows: IMF(%) = [(x-y)/x] × 100. Total RNA isolation, sequencing and raw data analysis On the basis of the IMF content results, total RNA was isolated from LD and AA samples excised from the 0.5and 2.5-year-old yak with TRIzol reagent (Invitrogen, CA, USA). Then, DNase and an RNeasy Mini Kit (Qiagen, CA, USA) were used to purify the total RNA. NanoDrop 2000 Spectrophotometer (Thermo Fisher Scientific, DE, USA), Bio-Photometer (Eppendorf, Hamburg, Germany) and 1% agarose gel electrophoresis were used to measure the quantity and quality of the extraction total RNA. Furthermore, the RNA Nano 6000 Assay Kit with the Agilent Bioanalyzer 2100 system (Agilent Technologies, CA, USA) were used to assess the RNA integrity. After removed the ribosomal RNA with the Ribo-Zero rRNA kit (Epicentre, WI, USA), the RNA libraries (mRNAs, lncRNAs, and circRNAs) were generated with the mRNA-Seq Sample Preparation Kit (Illumina, CA, USA). The library quality was measured with the Agilent Bioanalyzer 2100 system and then sequenced using the Illumina HiSeq™ 4000 system according to the vendor's recommended protocol. Those containing ploy-N or adapter and low quality reads were removed from the sequenced raw reads, the retained reads were named clean reads. The Tophat2 software with default parameters was used to map the clean reads to the Bos grunniens genome (BosGru v2.0), and the mapped reads of each sample that existed at least one of both replicates were assembled with StringTie software using the default parameter. The annotated and unannotated transcripts were obtained using Cufflinks after reconstruction of the transcripts from our RNA-seq data. The coding potential calculator, Coding-Non-Coding-Index (CNCI, version 2) and Pfam Scan were used with default parameter to predict the annotated transcripts coding potential. Those transcripts that were predicted to have coding potential by two or all of the above three tools named as candidate set of novel protein-coding transcripts, whereas those without coding potential were named as novel lncRNAs. The different types of lncRNAs (inclusive of cis-acting, antisense, and trans-acting) were selected using Cuffcompare. The circRNA Identifier (CIRI) tool was used to identify the circRNAs according to previously studies [43,44]. The DECs were identified using EBSeq. Refer to the standard procedure, miRNA libraries were constructed and then the quality was assessed as above cDNA libraries. The libraries were then sequenced with Illumina HiSeq™ 2500 system according to the vendor's recommended protocols. Clean reads were obtained after the removal of raw reads containing 5′ adaptor, 3′ adaptor, no insertion sequence, and poly(A) in small RNA fragments, as well as those shorter than 18 nt, of known bovine classes of RNAs (ribosomal RNAs, messenger RNAs, small nuclear RNAs, transfer RNAs, small nucleolar RNA, small cytoplasmic RNAs, and repeats), and of low quality. The retained reads were mapped to the miRNAs in the miRBase 22.0 database (http://www. mirbase.org/), the mapped one was named as known miRNA. While, the unmapped one was then aligned to the yak genome, and the mirdeep2 algorithm was used to predict novel miRNA. Three softwares (mireap, mi-Randa and TargetScan) with default parameters were used to predict miRNAs and circRNAs targets. 
All transcripts expression level was calculated using the Stringtie and Ballgown tools, and normalized using FPKM with the RSEM software. The false discovery rate (FDR) and FC were analyzed using the edgeR package, only the transcripts with a FC of ≥2 and FDR < 0.05 were then assigned as differentially expressed transcripts. Gene ontology enrichment and KEGG pathway analyses GO annotation and KEGG enrichment analyses were conducted to annotate the potential function of the genes. GO enrichment analysis was carried out with the GOseq R package, and the GO terms of the DEGs were assessed using Fisher's exact test. The DAVID web server annotation tool (version 6.8, http://david.ncifcrf.gov/) was used to map the enriched pathways from the KEGG database. Only the p < 0.05 was considered statistically significant and listed. Prediction of lncRNA and miRNA targets and construction of the ceRNA network Previous studies hypothesized that mRNAs, lncRNA, and transcribed pseudogenes "talk" to one another through miRNA response elements [22], which form a large number of complex regulatory networks and play an important role in various biological processes. On the basis of this theory, we constructed a co-expression ceRNA network related to the regulation of IMF deposition in the yak. The detail methods are as given below: Three strategies: cis-acting, antisense, and trans-acting regulation prediction, were used to predict the targets of lncRNAs. For the cis-acting prediction, the locations of the paired lncRNAs and mRNAs in the genomic of yak were calculated, and the genes within 10 kb of the lncRNAs were named as cis-acting regulatory targets. For antisense regulation prediction, the RNAplex software were used to screen targets by comparing complementary bases between the lncRNAs and mRNAs. For trans-acting regulatory targets, the expression of the lncRNA was determined to be not related with the location of the mRNA but co-expressed with it. To predict the lncRNA targets, the free energy between them was calculate using LncTar. The co-expression of DECs and DEMs were measured with pearson's correlation coefficient method, where the absolute coefficient value that greater than 0.8 was considered relevant for the network construction, with p < 0.05 regarded as being statistically significant. Based on the correlation analysis between DECs and DEMs, The co-expression network of circRNA-miRNA was constructed. Upon these analyses above, we selected the codifferentially expressed DEGs, DELs, DECs, and DEMs involved in these co-expression networks to construct the co-expression ceRNA regulatory network related to IMF-deposition. RNA sequencing results validation using RT-qPCR To evaluate the reliability of the transcript expression data obtained by RNA-Seq. RT-qPCR were carried out with the LD and AA tissues. Using the PrimeScript™ RT reagent Kit (Takara, Dalian, China), total RNA was reverse transcribed for mRNA and miRNA detection. Using M-MLV reverse transcriptase kit (Takara, Japan) and random primers, total RNA was reverse transcribed for lncRNA evaluation. To verify the back-splicing junction of circRNA, the RNase R-and control reaction (without RNase R) were prepared and reverse transcribed into cDNA to amplify each circRNA with primers following previously method [45,46], then the products were sequenced to find the back-splicing sites. To experimentally evaluate the expression of circRNAs in LD and AA tissues, RNase R-treated cDNA were used as PCR templates. 
All qPCR experiments were conducted using the SYBR Premix Ex Taq kit (Takara, Dalian, China). The GAPDH, β-actin, and U6 small nuclear RNA genes were selected as the endogenous control genes (all primers are shown in Table S9). All qPCR validations were carried out with three biological replicates and triplicate reactions for each sample. After amplification, the products were confirmed by agarose gel electrophoresis and Sanger sequencing, and the relative transcript abundance was calculated using the 2^−ΔΔCt method.
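The relative abundance calculation is sketched below as a minimal illustration of the 2^−ΔΔCt method; the Ct values, the choice of GAPDH as the reference gene, and the use of the 0.5-year-old group as the calibrator are illustrative assumptions.

# Minimal sketch of the 2^-ddCt calculation for relative transcript abundance.

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    d_ct_sample = ct_target - ct_reference            # normalise to the reference gene
    d_ct_calibrator = ct_target_cal - ct_reference_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical example: a target gene in a 2.5-year-old sample relative to the
# 0.5-year-old calibrator, with GAPDH as the reference gene.
fold_change = relative_expression(ct_target=22.1, ct_reference=16.8,
                                  ct_target_cal=24.0, ct_reference_cal=16.9)
print(round(fold_change, 2))  # about 3.5-fold higher than the calibrator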
2023-01-15T14:28:58.701Z
2020-05-07T00:00:00.000
{ "year": 2020, "sha1": "1699f9551c715bab701777784b4ecc52f7f0c9be", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12864-020-6757-z", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "1699f9551c715bab701777784b4ecc52f7f0c9be", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
234514135
pes2o/s2orc
v3-fos-license
Perceived value of ride-hailing providers: The role of algorithmic management, customer dysfunctional behavior and perceived injustice Despite providing service and consumption are two sides of the same coin of value co-creation in the gig economy, value as an outcome was only investigated from the customer point of view, not from the provider. This study aims to explore the impact of algorithmic management, customer dysfunctional behavior and perceived injustice on Uber and Careem drivers perceived value in Egypt. Qualitative interviews and content analysis were employed. Thematic analysis will be used for identifying, analyzing, and reporting patterns within data. Our findings define how drivers’ perceived value is negatively influenced by algorithmic management, customer dysfunctional behavior, and perceived injustice. In order to increase drivers’ perceived value, ride-hailing companies should not only put consideration on how to improve the control of algorithmic management and customer empowerment but also have to revise their policies and decisions to provide positive value to their drivers. © 2020 by the authors. Licensee SSBFNET, Istanbul, Turkey. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). Introduction During the last decade, gig economy has become a universal and progressively important phenomenon (Bardhi & Eckhardt, 2012). It has risen a culture of "what's mine is your" (Botsman & Rogers, 2010), which attracted so many consumers due to convenience and reduced cost comparing with traditional ownership model (Camilleri & Neuhofer, 2017;Puschmann & Alt, 2016). For labor market, this new form of capitalism attracted great number of workers to engage in several forms of flexible work (De Stefano, 2016;Lee et al., 2015;Wu et al., 2019). Where digital platforms replaced the fixed employer-employee relationship with new structure, in which platforms play as "shadow employer" by mediating the process of supply and demand between workers and customers (Friedman, 2014;Gandini, 2019;Veen et al., 2020). They promise to reveal a society where no matter what you need it is available in low costs, with more choices for users and flexibility for workers, and service providers and receivers live more spontaneously and liberated experience from bureaucracy and middle-men (Cheng & Foley, 2019;De Stefano, 2016;Flanagan, 2019). Musson et al., 2020;Wood et al., 2018). Furthermore, various means such as feedback and rating provided by customers as well as ranking systems are used to have constant monitor over drivers' performance (De Stefano, 2016;Gandini, 2019). Although labor process theory sustains great effort in explaining labor control in gig economy (Gandini, 2019;Wu et al., 2019), it still unclear how digital labor are affected by such means of control (Beverungen et al., 2015;Cheng & Foley, 2019;Lee et al., 2015;Veen et al., 2020). Moreover, there is a massive movement toward putting customers to work and using them as source of competence (Zwick et al., 2008). As customers act proactive role in value creation, owning unequal power would encourage some of them to adopt dysfunctional behavior to gain more desirable outcomes (Auh et al., 2019;Kang & Gong, 2019). Yet, the current understanding of customer participation is focusing on customer benefits and its outcome to firms (Auh et al., 2019;Chan et al., 2010;Dong & Sivakumar, 2017). 
Hence, there is a need to recognize the effect of customer participation on the perceived value of service providers (Mustak, 2019). There are two dominant ride-hailing companies Uber and Careem in Egypt. By 2018, Uber acquire the first ranking of the Egyptian ride-hailing market, serving about 4 million passengers through 150000 drivers. Recently, by January 2020, 3.1 billion USD acquisition contract between Uber and Careem has occurred, under condition that both Uber and Careem are operating as separated companies. On one hand, Uber's General Manager in Egypt suggests that this acquisition is a very good step for the company in order to be able to totally control the market. On the other hand, drivers perceive this acquisition is only for Uber's favor. For instance, drivers' problems after this acquisition transformed from bad to worse. Drivers are surprised that the two apps became connected to each other. Thus, drivers whose accounts on Uber app were blocked for a certain reason, were shocked that their Careem accounts were also deactivated for no reason. Moreover, ride-hailing market in Egypt transformed from oligopoly to monopoly by Uber which gave it more power to change its policies without any fear from competition. Thus, this research contributes to the existing literature in several ways. First, it extends the existing literature of value co-creation by applying service-dominate logic. Sharing economy platforms are increasingly utilizing algorithms to manage and coordinate extremely large amounts of data on both workers and customers (Yu et al., 2017); but how workers are being impacted by these relatively new algorithmic management practices remains unclear (Cheng & Foley, 2019). Value as an outcome of co-creation of gig economy was only investigated from the customer point view not from the provider's (Zhang et al., 2018). Second, to best of my knowledge, it is the first study to investigate the role of customer dysfunctional behavior on services provider's outcomes in gig economy issue by applying SDL. Applying this approach is vital, in particular in the ride sharing because its main activity is providing a service. Which characterized by high-contact and people-processing services. In this respect, it is vital to realize the role and the relative importance of the various key factors related to service in ride sharing. Third, it contributes to labor process theory by investigating the effect of algorithmic management and perceived justice on gaming and resisting behavior. Since there are concerns about the role of platforms in creating, and shaping, relations of co-creation (Gandini, 2019). Moreover, there is little doubt that algorithmic management is providing useful and effective tools for the operation of large sharing economy platforms (Cheng & Foley, 2019). Finally, this study will investigate the role of algorithmic management in new context. It will be applied in one of the developing countries (Egypt). It is noticed that most studies of gig economy have been conducted level in developed countries. Literature review Due to Marxist classic premises which are the main principles of the core theory of labor process (LPT), the process of creating value is totally dependent on labor shoulders. Moreover, effective labor control leads to extraction of surplus value (Beverungen et al., 2015;Wu et al., 2019). LPT plays a great role in assessing labor-capital relationship specially when it comes about control and exploitation (Jaros, 2005;Veen et al., 2020). 
It argues that managers have to keep control over workers because workers may raise criticisms about procedures, the amount of effort required, and the meaning of tasks (Elliott & Long, 2016). Although LPT provides a practical general outline for understanding work in capitalist institutions (Thompson, 1990), activating workers' effort in the labor process remains a main challenge (Veen et al., 2020). New marketplace access mechanisms that move beyond traditional forms of access are created through relocation markets, social networks or peer-to-peer matching platforms (Uber, Airbnb) providing products, services and skills to be shared (Bardhi & Eckhardt, 2012). Gig work has created a lot of hype and has been presented as the next stage of capitalist development (Goods et al., 2019). Many different terms describe this type of work, such as the 'sharing economy', 'gig economy', 'crowdsourcing' or the 'collaborative economy', which makes defining it very challenging (Goods et al., 2019;Stewart & Stanford, 2017). The term 'gig economy' recognizes the establishment of a capital-labor relationship between a worker and a digital platform, which mediates workers' supply and consumer demand for the completion of small tasks and operates at once as a market intermediary (Cheng & Foley, 2019;Friedman, 2014). It also denotes the "collaborative consumption" made up of the activities of sharing, exchanging, and renting resources without owning goods (Puschmann & Alt, 2016). With new capitalist technologies providing the infrastructure for exchanging, interacting, communicating, and participating in the network (Ganapati & Reddick, 2018), the role of traditional control tools (e.g. different human resource management control strategies) has become very limited (Jabagi et al., 2020;Wu et al., 2019), which has led to different regimes of control over the labor process (Gandini, 2019). Additionally, employers are converting their modes of control from direct control to a hybridity of control (Veen et al., 2020), in which capital preserves the overall managerial prerogative by applying interlinking, complementary and merged means of control (Callaghan & Thompson, 2001;Thompson & van den Broek, 2010). For example, hybridized control depends on software programs that rely on big data to monitor and evaluate work and to stimulate worker cooperation (Elliott & Long, 2016;Thompson & van den Broek, 2010;Veen et al., 2020). This has resulted in a shift of managerial responsibilities from humans to forms of 'algorithmic management' (Cheng & Foley, 2019). According to the rising body of literature on algorithmic management (Lee et al., 2015;Möhlmann & Zalmanson, 2018;Rosenblat & Stark, 2016), it consists of self-learning systems that take over from human managers the responsibility of managing virtual workers, executing decisions and optimizing performance (Jabagi et al., 2020;Jarrahi & Sutherland, 2019;Schildt, 2017;Veen et al., 2020). Ride-sharing algorithmic platforms are built on a continuous stream of information about workers' behavior in any given situation, as well as on the automatic implementation of algorithmic decisions (Rosenblat & Stark, 2016). Hence, work under the control of algorithmic management has two main features. First, rating systems and acceptance rates track driver performance and are ultimately used as tools to assess it (De Stefano, 2016;Gandini, 2019).
Second, in algorithmic management processes, drivers interact with a system that lacks transparency and have little awareness of the set of rules governing the platform (Möhlmann & Zalmanson, 2018). Although algorithmic management allows ride-sharing companies to control work processes effectively (Cheng & Foley, 2019), it has attributes that may affect drivers negatively (Jarrahi & Sutherland, 2019;Rosenblat & Stark, 2016). For example, a high acceptance rate means more gigs for a service provider (Griesbach et al., 2019); a low acceptance rate therefore brings not only a shortage of rides but also account deactivation, which places huge pressure on drivers (Lee et al., 2015). Lack of transparency about the explicit rules governing algorithm parameters not only influences drivers' economic outcomes but also affects their feelings and behaviors (Möhlmann & Zalmanson, 2018). For instance, the algorithm implements automatic decisions, such as penalties or account blocks, without investigating the driver or giving a detailed explanation, which fosters negative psychological emotions (Lee et al., 2015). Some of these control practices necessitate little human involvement (Cheng & Foley, 2019;Reid-Musson et al., 2020). For instance, Uber manages its drivers through both the algorithm and 'peer pressure', transforming customers into a monitoring instrument over drivers' performance (Gandini, 2019;Lee et al., 2015;Rosenblat & Stark, 2016). Gig platforms have moved labor processes outside factory walls and spread them across the entire society (Beverungen et al., 2015). Furthermore, they fence off the intervening social relations and transform them into production relations (Adler, 2007;Gandini, 2019). Governing customers has also become the latest strategy in customer management (Zwick et al., 2008). These strategies aim at using consumers as partners (Cova et al., 2011), whereby direct control shifts to unobtrusive control in which customer-led practices play a massive role in maintaining the growth of the service economy (Fuller & Smith, 1991). Additionally, the socialization of capitalist relations has converted customers into a 'collective worker' (Adler, 2007), with customer feedback used as a tool to monitor, evaluate and discipline service workers' performance (Fuller & Smith, 1991). For instance, Uber and Careem have empowered their customers to perform as middle managers who use their ratings as an evaluation tool that determines drivers' employment eligibility (Rosenblat & Stark, 2016). However, although these new forms of the social relations of production seem highly productive (Beverungen et al., 2015), passengers may misuse their power and believe they have the authority to make excessive demands, which can be a burden on drivers (Kang & Gong, 2019;Reid-Musson et al., 2020). Thus, balance in the use of customer control is critical to sustaining the generation of surplus value from these productive social relations (Zwick et al., 2008). Creating value requires balance in the control process and its components; this balance is the degree to which individuals are treated justly and whether the outcomes gained and the processes carried out are fair (Sulu et al., 2010). With increasing globalization and international competition, human resources have become the most strategic asset of every organization. Thus, organizations are increasingly concerned with employees' perceptions of justice because of their influence on employees' attitudes and behaviors (Thomas & Nagalingappa, 2012).
The organizational justice literature distinguishes several sub-dimensions of justice: distributive justice, which addresses the reward system; procedural justice, which involves the organization's decision-making procedures; interactional (interpersonal) justice, which reflects how people react to their perceptions of the social sensitivity of the interpersonal treatment they receive at the workplace; and informational justice, which concerns the information provided about work (Leow & Khong, 2009;Paré & Tremblay, 2007;Sulu et al., 2010). The literature on perceived justice proposes that employees deliver beneficial outcomes to their organization when it treats them fairly (Ibrahim & Perez, 2014). Moreover, employees who perceive higher levels of informational, procedural and interpersonal justice experience higher perceived value (Fischer, 2013;Georgalis et al., 2015;Tenhiälä et al., 2013). For instance, employees may suffer insomnia, uncertainty and self-isolation when they lose control over the factors and outcomes of decisions made by their organizations (Lind & van den Boos, 2002), since the distribution of rewards is not always as significant as the process by which distribution decisions are made (Cohen-Charash & Spector, 2001). Moreover, suffering mistreatment and disrespect, lack of transparency, intimidation and absence of accessibility leads to symbolic value destruction (Greenberg, 2006). Thus, this study suggests that organizational injustice has a negative effect on ride-hailing drivers' perceived value. The impact of algorithmic management and customer empowerment on human workers To explore the impact of algorithmic management and customer empowerment on human workers, and to solicit a suitable number of informants, we used both interviews and content analysis. Drivers have little direct contact with company representatives but can interact with each other freely through online forums to gain social knowledge of the rideshare systems, which offers a good opportunity to explore the impact of algorithmic management on human workers. Interviews provide a good opportunity to improve our understanding of algorithmic management and customer empowerment, as well as to gain deep knowledge about the dimensions of perceived value. Besides interviews, content analysis was conducted on the largest Facebook Uber and Careem pages and groups in Egypt over the period February 2019 to September 2020. Seventeen interviews were conducted over the period July-December 2019 with drivers who work for Uber and Careem. Drivers were recruited by advertising in Facebook drivers' groups and through actual trips. Conducting interviews was very challenging, since drivers were afraid that the interviewers were spies for the ride-hailing companies, sent to find out who was against them and have them dismissed. The interviewers therefore avoided asking direct questions about drivers' demographic information. The interview process continued until it reached saturation (Creswell, 2016). This study was carried out in accordance with academic ethical protocols. First, we conducted four unstructured interviews without following fixed rules, to capture as much data as possible about drivers' jobs and how working as a captain is valuable (Page et al., 2018). Semi-structured interviews were then designed based on the analysis of these pilot interviews. Appendix 1 gives the details of the participants. This study relied on going 'back and forth' between the literature and fieldwork to understand the phenomena (Kaplan & Orlikowski, 2013).
Thematic analysis was used to identify, analyze and report themes within the data. It is built on six phases of analysis: first, familiarizing ourselves with the data by transcribing the recorded interviews and reading all written material more than once; second, creating an initial list of codes; third, categorizing the different codes to search for potential themes; fourth, refining the themes to ensure that the data within each theme are coherent; fifth, defining the core of each theme and what it is about, and formulating sub-themes; and finally, writing the final report (Braun & Clarke, 2006). Findings According to Smith & Colgate (2007), there are four main categories of value: functional/instrumental, experiential/hedonic, symbolic, and cost/sacrifice value. Functional value concerns the extent to which a service or product performs appropriately, has accurate functions and delivers expected outcomes (Woodruff, 1997). Experiential/hedonic value relates to the degree of experience, emotions or feelings that a service or product can create (Woodall, 2003). Symbolic value reflects the degree of psychological meaning attached to a service or product, whereas cost/sacrifice value is linked to the costs of value creation transactions (Smith & Colgate, 2007). This study focuses on functional, experiential/hedonic, and cost/sacrifice value because interviewees did not mention anything related to symbolic value. Functional value Ride-hailing companies tend to hide the customer's pickup location, which appears only when the driver accepts the trip; the driver only learns the destination on reaching the pickup point. Drivers claim that hiding this information is not practical, since some passengers may request rides to unsafe destinations. P6 "Someday I got a trip; when I reached the pickup point, I discovered that the passenger's destination is a place with a bad reputation. So, I apologized to the customer then I canceled the trip." Both companies also use the acceptance rate as a tool to evaluate drivers and to control supply flow. A driver may accept a trip that could harm him because he does not want to hurt his acceptance rate and be penalized. As a result, drivers think that the acceptance rate is not an efficient way to evaluate captains. Moreover, they feel this tool should be more flexible according to rides and situations, because some trips are not safe and some pickup points and destinations have unpaved roads. P9 "I have a certain percentage of cancellation; if I reach this percentage the company closes my account. I have to accept any orders to increase my acceptance rate. The company has to either give me the right to choose to go or not, because sometimes I get trips to jammed and unpaved areas which cause damage to the car, or increase the fare for such places." P11 "I refuse a request if there are requests in places that are unsafe, and places that one may be afraid to enter because any problem may occur." Although Uber and Careem are improving their algorithmic management continually, many drivers reported that they face technical problems with the application, such as the system hanging, and problems with the GPS. P4 "Sometimes the app hangs." P6 "The GPS is wrong and makes me enter wrong and closed roads; it is very old and is not constantly updated, and here I speak about Uber's map itself and not Google Maps." Furthermore, they claimed that the system occasionally miscalculates their trip fees.
P16 " The company system calculates the trip price according to its interest, not according to specified pricing policy." Facebook post:" It's not fair, how come a 22 km costs 13 EGP " A: " the same with me, it counted 4.46 EGP for more than 10 KM trip". B: "It happened with me and the company didn't give me anything" Moreover, when drivers face problem unfortunately, they cannot find appropriate support because they are mostly referred to automatic systems. P7 "If we send a complaint, the response is handled by someone who does not understand the importance of the situation and can't tell the captain how to act properly. Sometimes the system sends pre-activated responses, for example: an automated answering system please go to the branch of the company." Facebook post" What shall I do the app is not working since couple days, and I called the company: they told me to send a message; I sent but I didn't receive any replay" A: don't worry I happens with me every year for four days at the same time then it opens by itself" Uber and Careem claim that the assignment process depends on customer get trips to the nearest driver but the reality is not totally clear how rides are allocated. P8 " Sometimes, determine the destination itself forces me to cancel trips, it is not logical that I am in downtown and my destination is east "Madinaty" in Suez road then the program send me a request from a client wants to go to October 6 ate the west side from downtown." Many drivers complained from Uber automatic implementation of decisions due to unexplained penalties. However, drivers did not violate the company's policies or needs, sometimes they get warnings, suspension or even block without any explanation. P17 " The system issues random blocks to anyone without any reason and says fake trips-This does not happen-and the captain may have a trip and after he finish it, the system gave him a final block without a warning." Captains complained from using only divers' rate as a way to evaluate his attitude and behavior since some customer misuse this tool for their favors. P11 " Customer pretends that the captain talked with him in an improper manner, his car is not clean or he didn't follow the GPShowever their GPS may lead me to longer routes would take more time to reach the customers-to get free rides from the company." Moreover, some passenger are illiterate about rating system, some of them would think that one star is the highest rate which harms the driver's full rate. Facebook post" I don't know what I should do, a customer after the end of ride smiled to me and told me her is full rate then he pressed on one star and submit it, he thought that he is giving me the highest rate, should I call the company and explain what happened" A: "don't buzzer yourself the company will not listen to you" B: " It happened once with me and the customer send email to change the rate but nothing happened" Economic value As, every driver has limited number of cancelation opportunities to unwanted trips, drivers with low acceptance rate have to accept trips that costs may surpass the net income from these trips. For instance, the distance the drivers take to reach customer is not counted, while some times the system may assign 10 km far away from the captain. P3 "sometimes I get trips from fare places from me, so I would consume more unpaid gas". Also, some drivers claim that working for ride-hailing companies is not rewarding due to the growing company's commission plus the cost of fuel and maintenance. 
58 P16 " Unfortunately, the income is not good but I still working for Uber because of the economic conditions. The system mistake in calculating the journey fee and the expensive cost of licensing besides maintenance costs, and the difficulties we met in the street. In addition to weak bonus it became 50 pounds for every 20 trips, which means 2.5 for a trip, as well as what we can even achieve it." FG post: " is it normal that all the money I gain today from rides is totally spent on fuel?" A: this job does not worth any more. B: if your car depends on natural gas you will not lose. Furthermore, both Uber and Careem changed the counting policy from determining the trip cost based on actual time and distance taken during the trips to a fixed predetermined fee before the starting the trip, which may eventually lead to economic losses to the captain. P14 "When a customer requests a trip, the system determines the price based on a specific path. But during the trip, the customer may find that this road is crowded and decides to take another path -it may be longer than the path specified on the price in advance -and as a result of that the driver is the one who bears the cost of the additional distance and not the customer. Moreover, Uber and Careem had raised its percentage aligning with the fuel cost increase which affected the driver's income gravely. P10 " the first year and a half of the income was very good and I was able to do anything, after all this date. The percentage is very high and anything is needed for the car is expensive, if I worked 500 pounds a day, after excluding about 160 gasoline and 40 pounds expenses all day, I will go home with 300 pounds this is with excluding the second day's gasoline." Furthermore, some customers misuse fixed predetermined fee policy to save money on drivers' charge, which decreases drivers' profits. FG post: " the customer defined close destination on the application then he asked me to go further than this destination, the trips fee is calculated on the determined one, the customer refused to change his destination and told me he will make a report" A: " unfortunately, the company gives customers more than their right and now they treated us as their servants". Experiential/Hedonic value Drivers feel unsecured due to lack of information about their customers, especially after many criminal and theft cases conducted by customers. P11 "I don't kwon to whom I am going to until I reached the customer. As you know maybe the customer will be a murderer as what happen to our college "Hany". So, we feel that we are threatened all the time while there is nothing in the App offer us security." P6 " am not happy or satisfied, especially after Hani Shaker accident, I always in anxiety and anticipation" Drivers chose to work under unsafe and beneficial circumstances to fulfil their economic responsibility towards their families or to face unemployment. P10 " I am not happy or satisfied, I am working for now, because there is nothing else to do" Customers' evaluation is a very critical tool in determine captain status. One complain may change a driver's account from active to block without any investigation. Thus, drivers always feel anxiety and under never ending pressure. P9 " the biggest problem we face is that the company takes the customer's side more than captain high causes high mental pressure on us and makes us feel injustice." Moreover, customers may give driver low rate and the driver would not understand why her/she got this low rate. 
The driver just receives a report of this rating, but does not know who gave him or her the low rate and on which trip, and therefore does not know the cause of the problem. P16 "The company listens only to customers' complaints; it may block the captain's account without any justice and without investigating with him. Captain support service is not fair at all. The customer is above everything." Because they work in isolation from the company and colleagues, the only interaction drivers get is with customers. On the other hand, Uber has a non-communication policy; drivers are not allowed to have any kind of unofficial interaction with customers. P6 "I benefit from dealing with people; I interact with different social and scientific classes. But I do not try to engage in social networking and relations with passengers." Since the customer is the one responsible for the driver's rate, drivers have to learn how to deal with different types of people. Drivers may not have the right to start a conversation with a customer, but if the customer starts one, the driver can choose to continue or end it. P9 "Working as a captain added to my experience, but not human relations; experience in dealing with customers and roads, etc." Some of these interactions between drivers and customers end fruitfully and some do not, but in the end they add to captains' ability to deal with others. P13 "I interacted with people from different backgrounds, cultures, religions and countries; it made me know more people." Discussion According to SDL, value co-creation takes place between at least two actors who integrate resources (Vargo & Lusch, 2008); customers are always co-creators of value (Vargo & Lusch, 2004). Value-in-use is considered a central concept of SDL. It plays a vital role in all marketing exchange activities, since all parties engage in order to gain value (Ulaga, 2003). The SDL literature has been concerned with creating value from the customer's perspective; previous research has not investigated it from the point of view of the service provider, who applies specialized competences, co-produced with customers (co-creation), to create tangible and intangible value. Simply put, value in a marketing exchange is generally defined as the trade-off between the benefits, "what is received", and the sacrifices, "what is given" (Ulaga, 2003). Accordingly, ride-hailing service providers not only sacrifice a stable employment relationship (Friedman, 2014;Scheiber, 2017), but must also integrate their feelings and personalities into the work process to achieve customer satisfaction; in return, they get flexible employment arrangements (Leidner, 1999;Wu et al., 2019). The first attribute of algorithmic management that influences drivers' perceived value is the lack of transparency about how the algorithm makes assignments, which generates undesirable feelings toward the ride-hailing companies (Lee et al., 2015). This feature is intended to constrain drivers' ability to misbehave (Veen et al., 2020); however, drivers have found ways to game and resist the system (Möhlmann & Zalmanson, 2018). Furthermore, the lack of transparency makes working for Uber and Careem riskier, since the service provider has to pick up different unknown passengers (Reid-Musson et al., 2020). Neither company informs the captain about any trip details (pickup place and destination) before the trip is accepted.
Even though promises of autonomy and flexibility are linked to the gig economy, the non-transparency of the system makes drivers experience a loss of freedom, and consequently they feel that the company treats them unfairly by not providing relevant and full information (Möhlmann & Zalmanson, 2018;Reid-Musson et al., 2020;Wu et al., 2019). Additionally, both companies give every captain only a limited number of trip rejections; exceeding that number harms the driver's acceptance rate. For instance, drivers are expected to accept at least 80% of ride requests, and a low acceptance rate can result in account deactivation (Lee et al., 2015;Page et al., 2018;Ravenelle, 2017). Although the algorithm forces drivers to accept most passenger requests, acceptance does not always bring a worthwhile profit (Möhlmann & Zalmanson, 2018). For example, a driver may receive a trip that requires travelling 15 km, which is not charged, to reach the customer's pickup point, which means additional expenses for this driver. In addition, platform employers have shifted the economic risk of fluctuation onto drivers by changing wages according to demand conditions (Friedman, 2014). Moreover, technical issues such as the app hanging, an outdated GPS and miscalculation of ride fees not only affect drivers' perceived functional value but may also lead to customer dysfunctional behavior. Although there is growing attention to labor control in capitalist digital platforms, limited attention has been paid to applying labor process theory in the gig economy (Gandini, 2019;Wu et al., 2019). Embedding forms of emotional labor at the core of the working process, such as customer feedback and the rating system, regulates the interactional relationship between the two parties (Gandini, 2019). Drivers for ride-hailing companies perform as emotional laborers who present a placating manner to passengers, regardless of the passengers' demeanor, in order to get a high rating (Rosenblat & Stark, 2016). According to emotional labor theory (Hochschild, 2012), customer participation can be used as an effective control tool (Millar, 2008;Wu et al., 2019). The rating system is a critical labor control regime that empowers customers to perform as middle managers over drivers (Rosenblat & Stark, 2016), and the passenger's rating plays a crucial role in drivers' employability (Gandini, 2019;Rosenblat et al., 2017). A driver whose rating falls below 4.6 experiences a continuous fear of being fired through deactivation of his or her account. Thus, the rating ratio is a serious matter for drivers. The customer rating system is a double-edged sword. On one hand, it is an effective tool for maintaining high quality and creating economic benefit. On the other hand, ensuring that customers understand their role in the service and do not misuse their freedom is very challenging. Unfortunately, some customers are not generally aware of the rating system (Rosenblat et al., 2017); for instance, some presume that one star is the highest rank while five stars is the lowest. Also, drivers may receive a low rating for things outside their control, including customers misplacing their pickup location on the GPS, fluctuations in trip pricing and the waiting time until the driver reaches the pickup point. Getting a low rating for one of these reasons affects not only drivers' employability but also their perceived value.
For example, when drivers receive a low rating for something outside their control, they develop negative psychological feelings, which affects their perceived hedonic value (Lee et al., 2015). Moreover, drivers come to see customer rating as an ineffective tool for evaluating their performance during trips. According to Echeverri & Skålén (2011), creating positive value in practice is somewhat unrealistic, since it is not guaranteed that both peers will cooperate to create value; one party or both may engage in dysfunctional behavior deliberately or involuntarily, which leads to "value co-destruction" (Camilleri & Neuhofer, 2017). While drivers work hard to develop their reputation on the platform through the rating system (Rosenblat et al., 2017), some customers intentionally violate the commonly accepted norms of treating drivers or use their rating as a bargaining tool to gain extra desired outcomes (Kang & Gong, 2019). For instance, some customers dishonestly report service failures to get discounts or even a free ride as service recovery. As a result, the driver is penalized and his or her chances of getting highly paid rides decline. Additionally, drivers feel that passengers are taking advantage of them, which makes them feel angry, depressed and disappointed. Some passengers forget that the rating system is a tool for gathering feedback to improve the service. They misuse their power and mistreat drivers, for example by talking to them aggressively, being rude, or abusing them verbally (Chan et al., 2010;Kang & Gong, 2019;Wang et al., 2011), diminishing drivers' self-esteem and increasing their job stress. Moreover, drivers feel that this control tool gives customers the means to humiliate them, and they agree that both companies continuously take the customers' side at the expense of drivers' dignity. Dormann & Zapf (2004) argue that service providers who are insulted by their customers experience negative emotions towards their job and themselves. Moreover, workers under the control of algorithmic management suffer from social isolation (Wood et al., 2019). Although ride-hailing drivers are emotional laborers who work with people, they have direct orders from their companies that they have to follow during trips. Furthermore, drivers are not supposed to have any kind of social relationship with their passengers, as some passengers misunderstand them and make a report; that is why some drivers prefer to keep silent even when the customer is friendly. Finally, injustice is the keyword for value co-destruction. Injustice refers to an employee's belief that he or she has been treated unfairly (Ambrose et al., 2002). When employees experience an injustice related to the distribution of resources, they first examine whether the allocation decision was fair; if the process is unfair, they may show negative reactions. As a result, distributive injustice is not the most potent form of injustice in causing feelings of powerlessness and isolation. According to Bilal et al. (2017), employees in public sector organizations suffer greatly from injustice on the basis of gender, nepotism, race, favoritism, and excessive influence from government and unions, which causes stress, anxiety, uncertainty and feelings of disgrace; all of these ultimately affect employee as well as organizational performance. The regulation of the acceptance rate and the driver-passenger rating system offered many benefits to overall service functioning.
However, these numeric systems, which make drivers accountable for all interactions, were sometimes seen as unfair and ineffective and created negative psychological feelings in drivers (Lee et al., 2015). In other cases, drivers perceive that ride-hailing companies favor the passenger in settlements, and they also report that the platforms' policies are made for the benefit of both companies and customers. For example, the driver's rating plays a great role in determining a captain's employability, whereas a low customer rating never leads to deactivation of the customer's account. Moreover, drivers assume that passengers keep misusing their role in value creation because they do not receive any penalty for the violations they commit. Thus, organizational injustice links customers' dysfunctional behaviors and value co-destruction. It also creates negative feelings towards the platforms, the passengers and the drivers themselves. Sulu et al. (2010) argue that an organizational environment in which employees are not treated equally causes feelings of social isolation; in addition, an employee whose concerns, views, needs and opinions are not considered in the decision-making process feels isolated. Injustice also increases economic losses for the platforms themselves: organizational injustice generates counterproductive behaviors such as gaming and resisting, which drivers use as tools to gain more economic benefit. Conclusions Digital platforms hold the promise of a world more responsive to human needs. Ride-hailing companies focus mainly on creating competitive value for their customers by maintaining a high degree of control over drivers. Although using algorithmic management and customer empowerment seems effective, it has an ugly side that negatively affects service providers' perceived justice and value. Working with a system that has many technical issues and lacks transparency, and that uses the acceptance rate and driver ratings as evaluation tools, is not only very frustrating but may also lead to perceived organizational injustice, economic losses and the destruction of hedonic value. Moreover, drivers feel that ride-hailing companies are unjust and bear responsibility for customers' dysfunctional behavior, because placing great control power in customers' hands gives some of them the opportunity to misuse it for their own benefit. Customers' dysfunctional behavior, in turn, plays a great role in the destruction of drivers' perceived value. Hence, ride-hailing companies should adopt strategies that balance the desired control outcomes against drivers' needs and value, in order to keep providing high-quality service to their customers. This study acknowledges some limitations, which may inspire future research. First, data were collected only from Uber and Careem drivers in Egypt; further research could investigate the influence of algorithmic control and customer empowerment on service providers in different cultures and in other types of gig work. Moreover, this study relied on qualitative interviews and content analysis for an in-depth understanding of the studied issue; quantitative research is therefore needed to validate the influence of control regimes in the gig economy on service providers' perceived value. Furthermore, value co-destruction in the gig economy and how it affects service providers' behavior should be investigated.
2020-12-17T09:05:36.083Z
2020-12-12T00:00:00.000
{ "year": 2020, "sha1": "e3afd01002c00be160681d257e0a77f377ccbb32", "oa_license": "CCBYNC", "oa_url": "http://ssbfnet.com/ojs/index.php/ijrbs/article/download/960/749", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "932647326b88c824c597859ad0d9f571d2e7076b", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Psychology" ] }
222092398
pes2o/s2orc
v3-fos-license
Endoscopic Versus Microscopic Cartilage Myringoplasty in Chronic Otitis Media Introduction: Operations on the tympanic membrane of the middle ear, myringoplasty and tympanoplasty, are now widely accepted, and attempts are underway all over the world to standardize the surgical techniques. This study aimed to compare postoperative outcomes of endoscopic and microscopic cartilage myringoplasty in patients suffering from chronic otitis media (COM). Materials and Methods: This study compared 130 patients with COM who underwent either transcanal endoscopic myringoplasty, with repair of the perforation using auricular concha cartilage under general anesthesia (n=75), or the conventional repair method using a postauricular incision and tympanomeatal flap elevation under microscopic surgery (n=55). Results: There was no significant difference between the two groups in terms of hearing gain 1, 6, and 12 months after surgery (P=0.063); however, higher hearing gain scores were observed in the endoscopic group. Moreover, lower recovery time and less post-operative pain were reported in patients who underwent the endoscopic approach, compared with those treated with the conventional repair method (P<0.001). Conclusion: The endoscopic myringoplasty technique is a safe and effective way to improve hearing loss, as effective as the conventional method. Given the lower recovery time and post-operative pain, it seems to be the method of choice in myringoplasty surgery. Introduction Chronic otitis media (COM) is a complex inflammatory and infective disorder that accounts for many healthcare visits across the world. It is the main reason for hearing loss in different age groups (1). Middle ear inflammation with tympanic membrane perforation is the main feature of COM. The most common manifestations of COM include hearing loss, persistent otorrhea, tinnitus, and otalgia (2). Etiologically, COM has multifactorial origins, including the immune response to microbial species, abnormal anatomical variations, and even genetic susceptibility (3,4). All of these factors inhibit the pathways that would otherwise allow the perforated tympanic membrane to heal (5). Given the wide spectrum of etiological factors and pathophysiological mechanisms, the treatment approaches range from antimicrobial therapies to surgical interventions. Considering the role of bacterial infections in the creation and progression of COM, it has been shown that both aerobic and anaerobic bacteria can be isolated in about 90% to 100% of affected patients; therefore, combination antibiotic therapy is a mainstay of COM treatment (6). However, the majority of patients with COM need surgical intervention to achieve the best outcome. Initially, microscopic tympanoplasty was introduced as the gold-standard treatment approach in patients with COM. However, the visibility of different components of the middle ear may be limited with this technique (7,8). Recently, the development of endoscopic diagnostic and therapeutic approaches has received special attention for the management of COM. The endoscopic approach facilitates evaluation of and access to different parts of the middle ear. Accordingly, it has become possible to diagnose structural and pathological abnormalities as well as to repair defects in different components of the middle ear.
A few studies have shown the superiority and feasibility of the endoscopic technique when comparing the benefits of microscopic and endoscopic treatment approaches for the management of middle ear abnormalities in patients with COM (9,10). It seems that the utilization of the endoscopic approach in the treatment of patients with COM can effectively overcome the potential limitations of interventions guided by the microscope. However, insufficient data are available on the therapeutic benefits and long-term outcomes of endoscopic versus conventional microscopic approaches for the treatment of COM. As a result, the present study aimed to compare the success rates and postoperative outcomes of endoscopic versus microscopic procedures in patients suffering from COM. Study Design This retrospective study was conducted on patients with chronic otitis media who underwent cartilage myringoplasty at a tertiary referral center between 2014 and 2016, all operated on by one surgeon. Study Population A total of 130 patients who underwent unilateral cartilage myringoplasty were included in this study. A thorough history and physical examination were performed on all subjects by an otologist. Moreover, all patients were evaluated clinically and radiologically, and no subject showed evidence of cholesteatoma, inflammation, or otorrhea before the surgery. Pure-tone audiometry was conducted for each patient, and subjects with mild conductive hearing loss (pure-tone average < 40 dB HL) were selected for the study. This hearing threshold level was used as the baseline measurement. Regarding the exclusion criteria, patients with an adhesive or atelectatic middle ear, cholesteatoma or granulation tissue, an only-hearing ear, revision surgery, or prolongation of surgery due to non-surgical causes were excluded from the study. Surgical Procedures This study analyzed the medical records of patients treated with one of two therapeutic procedures: transcanal endoscopic or postauricular microscopic myringoplasty. The transcanal endoscopic myringoplasty was performed under general anesthesia. A rigid endoscope (4.0-mm, 0°, 18-cm-long lens, HOPKINS® telescope, Karl Storz GmbH & Co. KG, Tuttlingen, Germany) was used for this purpose. Unlike in the traditional methods, the tympanomeatal flap was not elevated, and the edges of the perforation were freshened under endoscopic guidance. The middle ear mucosa and ossicles were checked with 0° and 30° endoscopes. In cases in which the size of the perforation limited the assessment of the middle ear, a radial incision was made at the posterior-superior side of the tympanic membrane. This incision made it possible to assess the middle ear using a wide-angle telescope (Fig. 1). The cartilage graft was harvested from the concha, preserving the perichondrium on one side. The thickness of the cartilage was reduced to 0.5 mm with the slicer. Subsequently, after placing gel foam in the middle ear, the graft was placed medial to the edges of the perforation and the annulus, with the perichondrium facing the lateral side (palisade cartilage myringoplasty, underlay method). Only a thin layer of gel foam was placed over the graft at the end. The microscopic cartilage myringoplasty was also performed under general anesthesia. After the postauricular incision, the tympanomeatal flap was elevated under the guidance of the surgical microscope (Carl Zeiss OPMI microscope, Germany).
After freshening the edges of the perforation and evaluating the middle ear structures, the conchal cartilage was harvested, trimmed, and placed as described above using the underlay method (11). All the surgeries in each group were performed exclusively with an endoscope or a microscope, and no change was made in the surgical method. In both groups, only a nonsteroidal anti-inflammatory drug (ibuprofen 200-400 mg) was administered to all patients twice a day on the first post-operative day. Data Collection The demographic characteristics and preoperative symptoms (i.e., otorrhea, otalgia, tinnitus, and vertigo) were determined through a retrospective review of the patients' hospital medical records. The size of the perforation (small: <25%, moderate: 25% to 75%, large: >75% of the surface of the tympanic membrane), the site of the perforation (anterior, posterior, or central), and the duration of anesthesia (from induction to extubation) were assessed by evaluating intraoperative video recordings. Moreover, postoperative pain was evaluated one day after surgery using a visual analog scale (VAS) rating the severity of pain between 0 (no pain) and 10 (the worst pain imaginable). The postoperative visit notes were assessed 1, 6, and 12 months after the surgery for any complication, the presence of otorrhea, and surgical success, which was defined as graft take and absence of perforation (Table 1). Furthermore, the mean values of the air-bone gap (ABG) were calculated at 0.5, 1, 2, and 4 kHz, and the postoperative ABG was compared with the baseline ABG to quantify the hearing gain (Table 2). The data were analyzed in SPSS software (version 16, SPSS Inc., Chicago, USA), and the results were presented as mean±SD for quantitative variables and as median and interquartile range for categorical variables. Categorical variables were compared using the Chi-square test or Fisher's exact test, and the t-test or Mann-Whitney U test was employed to compare quantitative variables. A p-value less than 0.05 was considered statistically significant. Ethical Considerations This anonymized chart review was conducted after obtaining ethical approval from the Ethics Committee of the relevant University of Medical Sciences (IR.IUMS.REC 1396.96-06-31-28397). The study was carried out in accordance with the tenets of the Declaration of Helsinki. Patients' Characteristics Initially, 180 patients with COM and simple perforation of the tympanic membrane were candidates for cartilage myringoplasty with concha cartilage using the endoscopic or microscopic approach, which was assigned randomly. A total of 50 patients were excluded because they met exclusion criteria, and finally 130 patients (94 females and 36 males; mean age 39.18 years; age range 18-77 years) were included in the study and divided into the endoscopic (n=75) and conventional microscopic (n=55) groups. Based on the patients' statements, the last episode of otorrhea was 9.3±13.3 months before surgery (8.0±9.3 and 11.0±17.3 months for the endoscopic and microscopic groups, respectively). There was no difference between the groups regarding demographic characteristics, preoperative ABG values, preoperative symptoms, or side of surgery. Although the history of aural discharge differed between the two groups, all patients had been free of discharge for at least three months, so this did not seem to be an important finding.
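As a worked illustration of the outcome measures and statistical comparisons described in the Data Collection section above, the following is a minimal Python sketch using NumPy and SciPy. The audiogram and VAS values are invented placeholders rather than the study data, and the 2x2 success table uses counts implied by the success rates reported in the Results (73/75 and 53/55); hearing gain is computed here as baseline minus postoperative ABG, consistent with the positive gains reported later, and the tests (Mann-Whitney U, Fisher's exact) mirror those named in the analysis plan.

# Sketch of the outcome calculations described in the Data Collection section.
# Audiogram and VAS values below are illustrative placeholders, not study data.
import numpy as np
from scipy import stats

FREQS_KHZ = [0.5, 1, 2, 4]  # frequencies used for the four-frequency average

def mean_abg(air_db, bone_db):
    """Mean air-bone gap (dB) across 0.5, 1, 2 and 4 kHz."""
    air = np.asarray(air_db, dtype=float)
    bone = np.asarray(bone_db, dtype=float)
    return float(np.mean(air - bone))

# One hypothetical patient: baseline vs. 12-month thresholds (dB HL) at the four frequencies
baseline_abg = mean_abg(air_db=[35, 30, 25, 30], bone_db=[10, 10, 5, 10])
postop_abg = mean_abg(air_db=[25, 20, 15, 25], bone_db=[10, 10, 5, 10])
hearing_gain = baseline_abg - postop_abg  # positive value = ABG reduced after surgery
print(f"baseline ABG {baseline_abg:.1f} dB, post-op ABG {postop_abg:.1f} dB, gain {hearing_gain:.1f} dB")

# Group comparison of first-day VAS pain scores (non-parametric, as stated in the text)
vas_endoscopic = [1, 2, 2, 1, 3, 2, 2]   # placeholder scores
vas_microscopic = [4, 3, 5, 4, 4, 3, 5]  # placeholder scores
u_stat, p_vas = stats.mannwhitneyu(vas_endoscopic, vas_microscopic, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_vas:.4f}")

# Comparison of surgical success rates (categorical) with Fisher's exact test
# rows: endoscopic / microscopic; columns: success / failure (counts implied by 97.3% of 75 and 96.4% of 55)
table = [[73, 2], [53, 2]]
odds_ratio, p_success = stats.fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_success:.4f}")

Any real reanalysis would of course require the per-patient data; the sketch only shows the form of the calculations named in the statistical analysis paragraph.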
Intra-operative Findings In total, small, moderate, and large perforations were seen in 18.5%, 46.2%, and 35.4% of the patients, respectively. Regarding the site of perforation, 10.0%, 33.8%, and 56.2% of the patients had anterior, posterior, and central perforations, respectively. Moreover, there was an anterior canal wall overhang in five patients (two in the endoscopic group, three in the microscopic group). In the microscopic group, all three patients needed drilling and canaloplasty; however, canaloplasty was not performed in the endoscopic group despite the overhang. Furthermore, the mean duration of anesthesia was significantly shorter in the endoscopic group than in the microscopic group (76.7±38.8 vs. 161.0±41.4 min, P<0.001). There was a correlation between the size of the perforation and the duration of anesthesia (94.8, 100.4, and 137.7 min for small, moderate, and large perforations, respectively, P<0.001). The size of the perforation correlated with the duration of anesthesia in the endoscopic group (45.0, 75.4, and 101.0 min for small, moderate, and large perforations, respectively, P<0.001); however, perforation size was not significant in the microscopic group (P>0.901). The site of perforation had no effect on the duration of anesthesia in either group (P>0.293 and P>0.245 for the endoscopic and microscopic groups, respectively). Postoperative findings The postoperative VAS scores were 2 (1-2) and 4 (3-5) in patients who underwent endoscopic and microscopic surgery, respectively; postoperative pain was significantly lower in patients who underwent endoscopic surgery (P<0.001). The overall success rate of the operation was 96.9%, with failure to close the perforation in four cases. The success rates in endoscopic myringoplasty and microscopic surgery were 97.3% and 96.4%, respectively, with no significant difference between the groups (P>0.05). Regarding the relationship between operative success rate and baseline parameters (gender, age, side of surgery, and size or site of perforation), no significant association was found. Granulation tissue and mild otorrhea at the site of the canal incision occurred in four patients in the microscopic group and were controlled with topical medication and regular debridement. In the endoscopic group, no complications occurred during the healing process. No significant difference was found between the two groups in ABG values at 1, 6, and 12 months postoperatively. Regarding hearing gain, the overall values were 6.92, 8.37, and 9.46 dB at 1, 6, and 12 months after surgery, respectively. The hearing gain was slightly higher in the patients who underwent endoscopic surgery (7.9, 9.09, and 10.05 dB, respectively) than in those scheduled for the conventional microscopic approach (6.21, 7.55, and 8.29 dB, respectively); however, the difference was not statistically significant (P=0.063) (Fig. 2). Discussion In a study conducted by Huang et al. in 2016, despite similar rates of postoperative perforation and equal improvements in hearing and ABG, the endoscopic technique led to less perioperative nausea and vomiting as well as a shorter operative time, compared with the microscopic approach. Furthermore, they observed fewer tissue injuries and better cosmetic outcomes.
Additionally, they suggested endoscopic myringoplasty as a better choice than the conventional method (12). Similarly, Farahani et al. (2015) performed a study on patients over 15 years of age and indicated that increased visibility of the middle ear structures (the epitympanic, posterior mesotympanic, and hypotympanic spaces) was the main advantage of endoscopic assessment over the microscopic technique. Postoperative evaluation of the middle ear with the endoscope revealed residual disease in four out of 13 patients after surgery. They emphasized using the endoscope to search hidden areas of the middle ear to prevent recurrence in specific pathologies (13). Daneshi et al. published the results of a study comparing two groups of patients who underwent stapes surgery under the microscope or via an exclusively endoscopic approach. They achieved similar hearing levels, with shorter operating time and greater patient satisfaction in the endoscopic group compared with the traditional method (14). In another study, Daneshi et al. demonstrated the feasibility and advantages of the endoscope in performing same-day bilateral tympanoplasty; they achieved results similar to the microscopic method, with the added ease of performing bilateral simultaneous surgery with an endoscope (15). In the same vein, Daneshi et al. recently conducted a study managing class I and II glomus tympanicum tumors of the middle ear with an endoscopic approach and were able to achieve improvement in conductive hearing without any sensorineural hearing loss after surgery (16). In another study, carried out by Ulku et al., the postoperative mean ABG was significantly higher in the endoscopic group than in the microscopic group (17). Lade et al. conducted a study on two groups of patients undergoing myringoplasty using the endoscope or the microscope. In the microscopic group, nine out of 30 patients required canaloplasty because of either external auditory canal overhang or the need for ossicular assessment; in the endoscopic group, canaloplasty was not performed in any patient. They reported similar graft take rates and audiometric results after 24 weeks of follow-up. The lower rate of canaloplasty and better cosmetic outcome in the endoscopic group suggest endoscopic myringoplasty as an effective alternative to the conventional microscopic method with similar hearing benefits (18). In a retrospective study performed by Kuo et al., the comparison of endoscopic and microscopic tympanoplasty demonstrated the feasibility of the endoscope in tympanoplasty, with the same benefit in hearing improvement and success rate and a comparable complication rate. Moreover, the endoscopic group experienced much shorter operation times, smaller operation wounds, and lower medical costs (19). In a retrospective study by Choi et al., the surgical outcomes of type one tympanoplasty using the two approaches were compared. The patients were followed for three months, and the authors found similar audiometric results, postoperative pain levels, and graft take rates; operation time was shorter and pain on the first postoperative day was lower in the endoscopic tympanoplasty group than in the microscopic tympanoplasty group (20). In a recent study by Kaya et al. (2017), the feasibility of using the endoscope to repair tympanic membrane perforations in uncomplicated COM patients was shown.
In their endoscopic method, they elevated a limited tympanomeatal flap (described as limited to the 1 to 6 o'clock positions and not extending to the malleus), which yielded good hearing results and low complication rates (20). In the present study, the endoscopic approach was superior to the conventional microscopic method with respect to lower postoperative pain severity and shorter anesthesia time. In the microscopic group, canaloplasty and drilling of the anterior overhang were mandatory in three cases, whereas this was not necessary in the two cases with an overhung canal in the endoscopic group. Furthermore, a slightly higher postoperative hearing gain was observed in the endoscopic group, although this was not statistically significant. No clear explanation was found for this finding, since the same grafting method and graft materials were used in both groups. It seems that the transcanal approach without elevating the tympanomeatal flap in the endoscopic group was responsible for less postoperative pain and faster recovery. As mentioned above, granulation tissue and otorrhea were observed postoperatively in four cases in the microscopic group and in none in the endoscopic group, which could be due to the canal incision. These findings may reflect the benefits of not elevating the tympanomeatal flap, which makes the procedure much easier and thereby accelerates the healing process. A great advantage of the endoscope was the ability to assess the middle ear and ossicular chain before performing myringoplasty. As described before, the use of a radial incision in the remnant of the tympanic membrane in small perforations facilitated better exposure of the middle ear contents; despite extending the incision, no complications were observed with this method. However, endoscopic ear surgery has drawbacks, such as working with one hand, vision obscured by even minimal bleeding, and the lack of a three-dimensional view. The surgeon can overcome these disadvantages with experience over time. Most previous studies obtained findings similar to the outcomes of the current study. It seems that almost all of the advantages of endoscopic myringoplasty over the conventional method derive from the greater visibility of middle ear structures provided by the endoscope. However, the results obtained in the present study and in previous studies might be influenced by potential confounders, such as the experience of the surgeon. The limitations of this study include the selection of patients from a single center, the non-randomized recruitment of participants, and the unblinded nature of the review, which may increase selection bias and the possibility of confounded results. Nevertheless, it is suggested that transcanal myringoplasty be used in these cases to decrease postoperative pain. Conclusion In conclusion, there was no significant difference between endoscopic myringoplasty and microscopic surgery in this study, with similar operative success rates and hearing gains. However, in addition to the greater ability to observe the different components of the middle ear with the endoscopic approach, this technique can lead to a more favorable outcome in terms of faster recovery in the treatment of COM. Despite the small difference in pain scores, it also seems that endoscopic ear surgery is on average faster than the microscopic approach, which needs further investigation.
2020-10-02T05:06:10.684Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "6eba03f1076cc2b92dd3808445a6b8458b9e822a", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "6eba03f1076cc2b92dd3808445a6b8458b9e822a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7841178
pes2o/s2orc
v3-fos-license
Structure of Protein Geranylgeranyltransferase-I from the Human Pathogen Candida albicans Complexed with a Lipid Substrate* Protein geranylgeranyltransferase-I (GGTase-I) catalyzes the transfer of a 20-carbon isoprenoid lipid to the sulfur of a cysteine residue located near the C terminus of numerous cellular proteins, including members of the Rho superfamily of small GTPases and other essential signal transduction proteins. In humans, GGTase-I and the homologous protein farnesyltransferase (FTase) are targets of anticancer therapeutics because of the role small GTPases play in oncogenesis. Protein prenyltransferases are also essential for many fungal and protozoan pathogens that infect humans, and have therefore become important targets for treating infectious diseases. Candida albicans, a causative agent of systemic fungal infections in immunocompromised individuals, is one pathogen for which protein prenylation is essential for survival. Here we present the crystal structure of GGTase-I from C. albicans (CaGGTase-I) in complex with its cognate lipid substrate, geranylgeranylpyrophosphate. This structure provides a high-resolution picture of a non-mammalian protein prenyltransferase. There are significant variations between species in critical areas of the active site, including the isoprenoid-binding pocket, as well as the putative product exit groove. These differences indicate the regions where specific protein prenyltransferase inhibitors with antifungal activity can be designed. Protein prenyltransferases have been a focus of cancer chemotherapeutic research for over a decade. Two FTase inhibitors, Lonafarnib (Schering) and Tipifarnib (Johnson & Johnson), have advanced to late-stage clinical trials (10) for the treatment of cancer. In lower eukaryotes, prenylation of essential signal transduction proteins is also required for function. In pathogenic microorganisms, such as C. albicans, disruption of the prenylation of essential cellular proteins has the potential for the development of new antifungal medications (5-8, 23). The Ram2 gene in C. albicans encodes the common alpha-subunit of FTase and GGTase-I; knock-out mutations of this gene are lethal (5). As in mammalian cells, knock-out of a single beta-subunit is not lethal, because the remaining CaaX prenyltransferase can cross-prenylate non-cognate substrates (7). Even so, at least one series of selective CaGGTase-I inhibitors has shown significant antifungal activity, suggesting that impairment of one of the CaaX prenyltransferases may be sufficient for effective treatment (23). In C. albicans, Rho family substrates of GGTase-I regulate cell wall biogenesis, while Ras family substrates of CaFTase have been strongly implicated in virulence by regulating the transition from yeast to hyphae (5). Selective inhibitors of C. albicans CaaX prenyltransferases would therefore provide tools for understanding morphological changes in this yeast as well as potential antifungal treatments. Here we present the crystal structure of CaGGTase-I at 1.8-Å resolution in complex with its cognate lipid substrate, geranylgeranylpyrophosphate (GGPP). This structure of a non-mammalian protein prenyltransferase provides high-resolution structural insights into the evolution of the protein prenyltransferase mechanism in lower eukaryotes and mammals. Although the yeast enzyme shares a similar overall architecture with its mammalian ortholog, close inspection of the active site reveals areas of significant divergence in critical regions.
In particular, mechanisms of isoprenoid selection differ in the two enzymes; the putative product exit groove also varies significantly. We postulate that the unique structural features of CaGGTase-I will provide opportunities to design selective inhibitors for the development of new anti-fungal therapeutics. EXPERIMENTAL PROCEDURES Cloning and Protein Expression-C. albicans genomic DNA (strain SC5314) was obtained from the American Type Culture Collection. The RAM2 gene encoding the ␣-subunit was amplified from the genomic DNA using standard PCR methods and Platinum Pfx High Fidelity polymerase (Invitrogen). The forward primer sequence (SalI restriction site underlined) was 5Ј-CTGACGCCATGGATGACAGACTCCAAA-TATGAC-3Ј and the reverse primer sequence (NotI site underlined) was 5Ј-CATTATGCGGCCGCTTACACCGA-TGTGAG-3Ј. The insert was digested with restriction enzymes SalI and NotI (New England Biolabs) for subcloning into the expression vector. Amplification of the CDC43 gene encoding ␤-subunit was challenging because of the particularly AT rich sequence. The insert was amplified in two steps. First, primers were designed to amplify from regions flanking the gene, ϳ250 bases upstream and downstream of the coding sequence. PCR was performed using the Phusion High Fidelity DNA polymerase (New England Biolabs) using the manufacturer's suggested procol, with the forward primer 5Ј-TCAAACCGGCTTCTTCAAGT-3Ј and the reverse primer 5Ј-TGTTGATTGTGTGTGTGGGA-3Ј. The resulting fragment of ϳ1.7 kb was used as the PCR template for another round of PCR amplification using TaqDNA polymerase (Invitrogen) according to the manufacturer's protocol with the following primers: forward, 5Ј-GCG-CTGCATATGAACCAACTGCTGATTAACAAACATGAG-AAATTTTT-3Ј (NdeI restriction site underlined); reverse, 5Ј-GCGCTGCTCGAGTTAATACTTTATTTTTTCTTTAA-AAAATTGATACGATTCTTTTGTAATTG-3Ј (XhoI site underlined). The resulting PCR product was cloned into the PCR2.1 TOPO vector using the TOPO TA cloning kit from Invitrogen. Plasmids isolated from colonies positive for insert FIGURE 1. A, protein prenyltransferase reaction scheme. Alkylation of the cysteine ␥ sulfur by the isoprenoid (C1 position) produces prenylated CaaX tetrapeptide product; pyrophosphate is the leaving group. B, protein prenyltransferase reaction cycle, adapted from Ref. 13, 14. Enzyme binds the lipid substrate (red) first (complex 1), followed by CaaX tetrapeptide substrate (blue) binding to generate a ternary substrate complex 2. Prenylated product complex 3 is formed, and pyrophosphate is released. A fresh lipid substrate molecule (red) then binds to initiate a new turn of the cycle and to complete the current by displacing prenylated product complex into a product exit groove (green) to generate 4. Displaced product is released from the active site allowing binding of a new CaaX substrate and continuing the cycle. incorporation were digested using restriction enzymes XhoI and NdeI (New England Biolabs) to liberate the CDC43 insert. The pCDFDuet-I vector (Novagen) was chosen to co-express both subunits of the enzyme in Escherichia coli (E. coli); the vector contains two multiple cloning sites (MCS) under the control of separate isopropyl-1-thio-␤-D-galactopyranosideinducible T7 promoters for robust co-expression of two gene products. The digested RAM2 gene encoding the ␣-subunit was subcloned into the SalI and NotI restriction sites in MCSI of the pCDFDuet-I vector. 
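As a small cross-check of the primer design described above, the sketch below scans the primer sequences quoted in the text (hyphens removed) for the recognition sites of the named enzymes; the recognition sequences are the standard ones for these enzymes, and the helper function name is ours.

```python
# Minimal sketch: locate restriction-enzyme recognition sites in the primer sequences quoted above.
# Recognition sequences are the standard ones; primers are copied from the text with hyphens removed.
SITES = {
    "SalI": "GTCGAC",
    "NotI": "GCGGCCGC",
    "NdeI": "CATATG",
    "XhoI": "CTCGAG",
}

PRIMERS = {
    "RAM2_reverse":  "CATTATGCGGCCGCTTACACCGATGTGAG",                    # NotI site expected
    "CDC43_forward": "GCGCTGCATATGAACCAACTGCTGATTAACAAACATGAGAAATTTTT",  # NdeI site expected
    "CDC43_reverse": ("GCGCTGCTCGAGTTAATACTTTATTTTTTCTTTAA"
                      "AAAATTGATACGATTCTTTTGTAATTG"),                     # XhoI site expected
}

def find_sites(primer: str, sites: dict) -> dict:
    """Return {enzyme: 0-based position} for every recognition site found in the primer."""
    return {enzyme: primer.find(seq) for enzyme, seq in sites.items() if seq in primer}

if __name__ == "__main__":
    for name, seq in PRIMERS.items():
        print(name, find_sites(seq, SITES))
```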
The digested CDC43 gene encoding the ␤-subunit was subcloned into the NdeI and XhoI restriction sites of MCSII of the pCDFDuet-I vector to achieve the final expression construct. The Duke University Medical Center DNA Analysis Facility performed the sequence analysis to confirm error-free construction of the expression vector. The final construct was transformed into C41 (DE3) Escherichia coli (AVIDIS, S.A.) for expression. A single colony was picked from the plate and grown overnight in a 50-ml LB culture supplemented with 50 mg/ml streptomycin. The 50-ml LB culture was used to inoculate 2 liters of LB media supplemented with streptomycin until the A 600 reached 0.8, at which point the culture was induced with a final concentration of 1 mM of isopropyl-1-thio-␤-D-galactopyranoside for 4 h at 37°C. The culture was supplemented with 300 M Zn(SO 4 ) 2 at induction. Cells were harvested at 6000 ϫ g, and the cell paste could be flash-frozen in liquid nitrogen and stored at Ϫ80°C for several months. Protein Purification-The cell paste was resuspended in a 10-fold volume of Buffer A (20 mM Tris, pH 7.7, 5 mM dithiothreitol, 5 M ZnCl 2 ) supplemented with SigmaFast general use protease inhibitor tablets (Sigma). Cells were lysed using a pressurized homogenizer (Microfluidics Corp.) and the resulting crude lysate was clarified by centrifugation at 45,000 ϫ g for 30 min. The lysate was first applied to a DEAE Sepharose column and fractionated using gradient from Buffer A ϩ 150 mM NaCl to Buffer A ϩ 300 mM NaCl over 8 column volumes. The fractions containing CaGGTase-I as determined by SDS-PAGE were pooled and brought to a final concentration of 0.8 M (NH 4 ) 2 SO 4 by 2-fold dilution with Buffer A ϩ 1.6 M (NH 4 ) 2 SO 4 . As with the purification of mammalian GGTase-I, a 2-fold molar excess of GGPP was added to the pooled fractions at this point as well, and the mixture was stirred at 4°C for 20 min before application to the column. The GGPP putatively displaces nonspecifically bound lipids in the active site of the molecule and results in a narrower elution peak from the phenyl-Sepharose column. After incubation with GGPP, the protein was applied to a phenyl-Sepharose column pre-equilibrated in Buffer A ϩ 0.8 M (NH 4 ) 2 SO 4 , and fractionated using a gradient of Buffer A ϩ 0.8 M (NH 4 ) 2 SO 4 to Buffer A over 8 column volumes. The fractions containing CaGGTase-I (SDS-PAGE) were pooled and adjusted to 10 mS/cm conductivity by dilution in Buffer A (Thermo Scientific conductivity meter) and applied to a Q-Sepharose column pre-equilibrated in Buffer A ϩ 150 mM NaCl. The protein was fractionated using a gradient from Buffer A ϩ 150 mM NaCl to Buffer A ϩ 350 mM NaCl over 8 column volumes. The fractions containing CaGGTase-I (SDS-PAGE) were concentrated to 0.5 ml total volume using a centrifugal concentrator (50-kDa MWCO, Amicon), and applied to a 120-ml Superdex 16/10 gel filtration column equilibrated in Buffer A. The final fractions containing CaGGTase-I (SDS-PAGE) were concentrated again in a centrifugal concentrator (50 kDa MWCO, Amicon) to 15 mg/ml, flash-frozen in liquid nitrogen, and stored at Ϫ80°C. Typical yield was ϳ3-5 mg of purified CaGGTase-I per liter of E. coli culture. Crystallization and Data Collection-GGPP was added to an aliquot of protein at a 0.5:1 GGPP/protein ratio for 30 min prior to setting up the crystallization drop. 
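A back-of-the-envelope version of the ligand addition just described, assuming a hypothetical 100-µl aliquot; the 15 mg/ml concentration, 82-kDa mass, and 0.5:1 GGPP:protein molar ratio are taken from the text, everything else is straightforward arithmetic.

```python
# Rough sketch (not from the paper): convert the stated protein concentration to molarity
# and estimate the GGPP corresponding to the 0.5:1 GGPP:protein molar ratio.
protein_mg_per_ml = 15.0        # stated concentration
protein_mw_da = 82_000.0        # 82-kDa heterodimer (stated)
aliquot_ul = 100.0              # hypothetical aliquot volume
ggpp_to_protein_ratio = 0.5     # stated molar ratio

protein_uM = protein_mg_per_ml / protein_mw_da * 1e6   # mg/ml == g/L; (g/L) / (g/mol) = mol/L
ggpp_uM = ggpp_to_protein_ratio * protein_uM
protein_nmol = protein_uM * aliquot_ul / 1000.0         # µM * µl / 1000 = nmol
ggpp_nmol = ggpp_uM * aliquot_ul / 1000.0

print(f"protein ≈ {protein_uM:.0f} µM ({protein_nmol:.1f} nmol in {aliquot_ul:.0f} µl)")
print(f"GGPP to add ≈ {ggpp_uM:.0f} µM ({ggpp_nmol:.1f} nmol)")
```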
CaGGTase-I crystals were grown in hanging drop format in which 1 l of protein solution was mixed with 1 l of well solution consisting of 25% PEG 1500 and 1ϫ PCB buffer, pH 7.0 (100 ml of 10ϫ PCB contains 3.84 g of sodium propionate, 4.28 g of sodium cacodylate, and 11.29 g of Bis-Tris propane). Crystals appeared within 2-3 days and grew as long thin rods, with typical dimensions 400 m ϫ 50 m ϫ 50 m. Prior to data collection, crystals were transferred to a stabilizing solution containing 1ϫ PCB buffer pH 7.0, 30% PEG 1500, followed by cryoprotection in stabilizing solution plus 10% ethylene glycol. Crystals were flash-frozen in liquid nitrogen. Diffraction data were collected at Southeast Regional Collaborative Access Team (SER-CAT) 22-ID beamline at the Advanced Photon Source, Argonne National Laboratory at 100 K. Crystals diffract beyond 1.7-Å resolution; a complete data set was collected to 1.8-Å resolution (Table 1). CaGGTase-I crystallized in space group C2 (a ϭ 132.3 Å, b ϭ 66.05 Å, c ϭ 82.8 Å, ␣ ϭ ␥ ϭ 90.0°, ␤ ϭ 100.0°) with one molecule in the asymmetric unit. HKL2000 was used for data reduction and scaling. Structure Solution and Refinement-The CaGGTase-I structure was determined by molecular replacement using PHASER (24). A homology model derived from the rat GGTase-I structure (PDB code 1N4P, chains A and B, (14)) was constructed using MODELLER (25) and subsequently used as the search model. A cycle of simulated annealing was then performed on the solution in CNS (26), which gave an R factor of 41.4%. ARP/ wARP (27) was then used to retrace the model, properly fitting regions not successfully fit into the density by the simulated annealing refinement; iterative automated chain tracing with ARP/wARP and refinement in REFMAC5 (28) RESULTS Overall Structure of C. albicans GGTase-1-CaGGTase-I is an 82-kDa heterodimer consisting of 37-kDa ␣-subunit (306 residues) and a 45-kDa ␤-subunit (390 residues). Fig. 2A shows the overall structural features of the enzyme. The sequence identities with respect to the mammalian ortholog are 28 and 25% for the ␣and ␤-subunits, respectively. Despite this low overall sequence identity, the structure of CaGGTase-I is quite similar in overall architecture to mammalian GGTase-I (1.58 Å r.m.s.d. calculated over all aligned ␣-carbon atoms, Fig. 2B). The CaGGTase-I ␣-subunit is smaller than the mammalian protein (306 versus 377 amino acids). CaGGTase-I lacks an N-terminal domain rich in proline and glutamine residues present in the mammalian enzyme. This domain is disordered in all mammalian FTase and GGTase-I structures determined to date. Like the mammalian enzyme, the CaGGTase-I ␣-subunit consists of ␣-helices. There are 16 helices altogether, one short helix more than in mammalian enzyme, arranged in antiparallel pairs to form a crescent that envelopes part of the ␤-subunit. We observe variation in helix length in the ␣-subunit. Helices 5␣ and 8␣ extend one helical turn longer than the corresponding mammalian helix, and helix 12␣ is approximately one helical turn shorter. Five amino acids are inserted between the equivalents to12␣ and 13␣ in the mammalian enzyme and form an additional short helix in CaGGTase-I (13␣). The ␤-subunit is slightly larger than that of mammalian GGTase-I (390 residues versus 377 for human GGTase-I). Variation in loop length between the helices in the ␤-subunit accounts for the additional residues. 
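From the cell parameters and space group reported above, the content of the asymmetric unit can be sanity-checked via the Matthews coefficient. The sketch below assumes the four general positions of space group C2 and the usual 1.23 Å³/Da constant for the solvent estimate; these are standard crystallographic conventions rather than values given in the paper.

```python
import math

# Monoclinic C2 cell from the text: a, b, c in Å and beta in degrees.
a, b, c, beta_deg = 132.3, 66.05, 82.8, 100.0
mw_da = 82_000.0        # one 82-kDa heterodimer per asymmetric unit (stated)
z_general = 4           # general positions in space group C2 (standard assumption)

# Monoclinic cell volume: V = a * b * c * sin(beta)
volume_A3 = a * b * c * math.sin(math.radians(beta_deg))

# Matthews coefficient V_M = V / (Z * MW); protein crystals typically fall near 1.7-3.5 Å³/Da.
vm = volume_A3 / (z_general * mw_da)

# Approximate solvent fraction from V_M (Matthews relation): 1 - 1.23 / V_M
solvent_fraction = 1.0 - 1.23 / vm

print(f"cell volume ≈ {volume_A3:,.0f} Å³")
print(f"Matthews coefficient ≈ {vm:.2f} Å³/Da, solvent ≈ {solvent_fraction:.0%}")
```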
The ␤-subunit is predominantly ␣-helical forming an ␣-␣ barrel with a central, largely hydrophobic cavity, which contains most of the residues that bind substrate and coordinate the catalytic Zn 2ϩ ion (Fig. 1). The CaGGTase-I ␤-subunit also has two short antiparallel ␤ sheet regions (residues 67␤-76␤ and141␤-154␤) remote from the active site, which do not appear in the mammalian enzyme. Two loops (residues 176␤-188␤, 261␤-272␤) and several residues at the termini (1␣-2␣, 305␣-306␣; 1␤) are not visible in the final electron density maps. The disordered loops correspond to two areas of allelic and strain variation in the ␤-subunit: the region 176 -188 contains a polyasparagine tract that can vary between 6 and 17 residues, depending on strain and allele. The variant described here contains six asparagines. The region 261-272 contains a region rich in asparagine, aspartate, and glycine. The number of repeats of these residues also varies across strains and alleles (8); the amino acid sequence of the variant reported here is KDGNGDNGNGDN. Isoprenoid Substrate Binding and Selection-Like mammalian GGTase-I (14), CaGGTase-I binds its isoprenoid substrate in a hydrophobic groove on one side of the active site cavity. The diphosphate moiety forms hydrogen bonds to lysine, arginine, and histidine residues, which are all mostly conserved across species. Although conformation of the GGPP lipid substrate is similar to mammalian GGTase-I (Fig. 3A), the binding mode of the fourth isoprene in CaGGTase-I varies significantly (Fig. 3A). In mammalian GGTase-I, this fourth isoprene is directed toward the CaaX substrate-binding pocket adjacent to the isoprenoid pocket; in the yeast GGTase-I, the phenylalanine 99␤ is bulkier than threonine 127␤ at the equivalent position in rat GGTase-I, directing the fourth isoprene away from the CaaX-binding site toward the edge of the active site cavity. This conformational difference suggests that the C. albicans prenyltransferases have an alternative mechanism for isoprenoid substrate selection. In mammalian FTase and GGTase-I a single residue (W102␤ in FTase and T49␤ in GGTase-I) dominates selection of isoprenoid length (14,32): the tryptophan in FTase presents a steric block to any isoprenoid lipid longer than a 15-carbon FPP, while the smaller threonine in GGTase permits binding of the additional five-carbon fourth GGPP isoprene. Mutagenesis of W102␤ to threonine converts FTase to a geranylgeranyltransferase (32). In CaGGTase-I, isoprenoid selection appears to be governed by two residues (L98␤ and L352␤), for which there are no equivalent in mammalian GGTase I. Inspection of a sequence alignment of CaGGTase-I ␤-subunit with the CaFTase ␤-subunit reveals variation at the positions equivalent to these leucine residues. CaFTase is predicted to have two tyrosines, 502␤ and 259␤, in this position. A homology model suggests that these residues impinge on the binding site for the fourth isoprene, excluding GGPP from the binding site and selecting for the shorter FPP (Fig. 3B). Zinc and Magnesium Dependence of the Prenylation Reaction-Like all protein prenyltransferases studied to date, CaGGTase-I is a Zn 2ϩ -dependent metalloenzyme (17,(33)(34)(35)(36)(37), with the Zn 2ϩ activating the cysteine thiolate of the CaaX substrate for attack on the C1 carbon of the isoprenoid substrate. 
The Zn 2ϩ coordination sphere is conserved and consists of an aspartic acid (D294␤), cysteine (C296␤), and histidine (H349␤), forming a distorted pentacoordinate geometry with two ligands contributed by D294␤ at 2.23 and 2.38Å, a ligand from C296␤ at 2.00Å, and a ligand from the N⑀ of H349␤ at 2.29Å. Consistent with the structures of mammalian FTase and GGTase-I (14,38), a water molecule occupies the position of a fifth ligand. In the mammalian enzymes, the water is displaced by the ␥ sulfur of the cysteine residue of the CaaX substrate upon peptide binding. Unlike the mammalian GGTase-I, CaGGTase-I is dependent on millimolar levels of Mg 2ϩ for its maximum reaction rate (8). This Mg 2ϩ dependence is also observed in both mammalian and Saccharomyces cerevisiae FTases, as well as S. cerevisiae GGTase-I (17, 39 -41). A Mg 2ϩ ion is hypothesized to stabilize the diphosphate leaving group in the chemical step of the reaction (Fig. 4A) (13,42). Modeling of a Mg 2ϩ in the crystal structure of mammalian FTase indicates that aspartate D352␤ is positioned to coordinate this ion next to the diphosphate leaving group in the modeled transition state (Fig. 4A) (13). Mutagenesis studies further support that this residue binds Mg 2ϩ (40). The structure of mammalian GGTase-I reveals that the terminal amine of a lysine side chain at the equivalent position could effectively substitute for Mg 2ϩ at this position ( Fig. 4A) (14, 39). The CaGGTase-I structure reveals that the equivalent position to D352␤ or K311␤ in the mammalian enzymes is arginine 339␤ (Fig. 4B). At neutral pH, the positive charge of arginine is delocalized over the guanidinium group. In the mammalian GGTase-I, the orientation of K311␤ is restricted by a tryptophan, W312␤, effectively directing the positive charge toward the diphosphate (14) (Fig. 4B). In CaGGTase-I, the residue adjacent to R339␤ is an aspartate, D340␤ (Fig. 4B). We propose a 2-fold effect: first, with a smaller neighbor (aspartate), the arginine can explore multiple conformations, which is supported by the observation that its guanidinium group is poorly ordered in the electron density; and second, the negatively charged D340␤ diminishes the positive charge density in this region, requiring higher Mg 2ϩ levels. The Mg 2ϩ -dependent S. cerevisiae GGTase-I has a lysine at the position equivalent to mammalian K311␤ and CaGGTase-I R339␤, but the mammalian tryptophan is replaced by asparagine N332␤ (isosteric with the CaGGTase-I D340␤), suggesting that the lysine conformation is less restricted, similar to CaGGTase-I R339␤. CaaX Substrate-binding Pocket-In the mammalian FTase and GGTase-I, the CaaX protein substrate binds in an extended conformation (Fig. 1B) with the cysteine coordinating the catalytic Zn 2ϩ ion and the C terminus anchored at the bottom of the pocket by a glutamine residue (Q167␣) (11,13,14,(43)(44)(45). In addition, the CaaX substrate makes significant van der Waals contact with the lipid substrate ( Fig. 1B) (11,13,14,44,45). We expect that the extended conformation will be recapitulated in CaGGTase-I, with the Zn 2ϩ and Q104␣ (equivalent to Q167␣ in mammalian GGTase-I) acting as anchor points. The CaGGTase-I ␤-subunit barrel forms a large central cavity, which binds the two substrates similar to mammalian GGTase-I (Figs. 1B and 2). Despite the similar overall architecture, CaGGTase-I exhibits significant variation in the identities of residues comprising the CaaX X-residue-binding site compared with the mammalian GGTase-I. 
Most of the residues within a 5-Å radius of the X-residue are non-conservative substitutions compared with the mammalian enzyme (14) (Fig. 5). This arrangement suggests that either there is degeneracy in the recognition of the CaaX tetrapeptide (6,8), or that the peptide substrates adopt a different binding mode, particularly with respect to the C-terminal X-residue. The structural adaptations of the binding site to accommodate the GGPP fourth isoprene are also likely to effect binding of the X-residue, because the two substrate-binding pockets are adjacent to each other and share several residues within van der Waals distance of both substrates (Fig. 1B). Product Exit Groove-The reaction path determined for mammalian GGTase-I and FTase reveals the presence of a displaced prenylated product intermediate that precedes product release (Fig. 1B) (13,14). The isoprenoid portion of this intermediate is bound in a solvent-exposed product prenylated product exit groove located adjacent to the CaaX substratebinding site (Fig. 1B). Product release for the mammalian protein prenyltransferases is the slowest step (300-fold slower relative to the catalytic step) in the reaction (19,21). The structurally determined pre-release intermediate product complex is consistent with this kinetic scheme. Aside from mammalian prenyltransferases, only S. cerevisiae FTase (46) has been characterized by pre-steady state kinetic analysis. The kinetically defined reaction path of S. cerevisiae FTase (46) is similar to the mammalian enzymes, with the notable exception that the product release step is no longer the clearly dominant slow step (3-fold slower relative to the catalytic step). This suggests that the product is not stably bound in this enzyme. Inspection of sequence alignment between the mammalian and S. cerevisiae FTase sequences reveals there is significant variation in the residues lining the exit groove, suggesting that the latter lacks this groove, accounting for the large difference in product release rate constants. Only the steady state kinetics parameters for the CaGGTase-I have been reported, and its overall turnover rate is not significantly different than is reported for mammalian GGTase-I (0.076 s Ϫ1 for CaGGTase versus 0.051 s Ϫ1 ) (6,39). The rate constants for product release are not available for CaGGTase-I. The crystal structure reveals significant variation in the exit groove compared with the mammalian enzyme (Fig. 6). In particular, residues 17-24 in CaGGTase-I, which make up one side of the putative exit groove, are positioned on average 4 Å closer to the other wall of the exit groove than is seen in the mammalian enzymes (13,14). This arrangement narrows this groove to 6.4 Å at its narrowest point, thereby presenting a steric block to a displaced isoprenylated product modeled in a similar position to that observed in mammalian GGTase-I. If a displaced isoprenylated product were part of the CaGGTase-1 reaction path, it therefore must bind shallowly in the exit groove. This arrangement is expected to change the rate constants for product release relative to the mammalian counterpart. DISCUSSION The CaGGTase-I structure reveals a high-resolution picture of a non-mammalian protein prenyltransferase. The Zn 2ϩ coordination sphere, essential for all protein prenyltransferases, is completely conserved. Despite relatively low sequence identity, there is a remarkable degree of structural conservation in regions of the protein apparently non-essential for activity. 
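To make the product-release argument above concrete: in a highly simplified cycle of irreversible, sequential first-order steps, the turnover time is the sum of the individual step times, so a release step roughly 300-fold slower than chemistry effectively sets kcat. The numbers below are the mammalian-like values quoted in the text, used purely for illustration rather than as a fitted model of CaGGTase-I.

```python
# Simplified illustration: for irreversible sequential first-order steps,
# 1/kcat = sum(1/k_i), so the slowest step dominates turnover.
def kcat_sequential(*rate_constants: float) -> float:
    return 1.0 / sum(1.0 / k for k in rate_constants)

k_release = 0.05            # s^-1, roughly the mammalian GGTase-I turnover quoted in the text
k_chem = 300.0 * k_release  # chemistry assumed ~300-fold faster, as described for the mammalian enzymes

kcat = kcat_sequential(k_chem, k_release)
print(f"kcat ≈ {kcat:.3f} s^-1 (vs k_release = {k_release:.3f} s^-1)")
# kcat comes out essentially equal to k_release, i.e. product release is rate limiting.
```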
By contrast, there is variation in parts of the structure involved in the molecular recognition of both the lipid and CaaX substrates. Despite these differences in the CaaX protein substrate-binding pocket, CaGGTase-I exhibits nearly identical substrate preferences to the mammalian enzyme to the extent that different sequences have been tested (6,8). Furthermore, the CaGGTase-I structure shows that the recognition and release of isoprenylated products varies across species. In particular, the product exit groove that is observed in the mammalian enzyme and confers an unusual interplay between substrate binding and product release is lacking in the CaGGTase-I enzyme. This exit groove may not be required for monoprenylation but may instead be a necessary feature to confer the processive diprenylation observed in the in Rab GGTases (13,14). We propose that the structural differences observed between mammalian GGTase-I and CaGGTase-I will be sufficient to devise a structure-based design strategy to develop C. albicansselective GGTase-I inhibitors. The variation in the CaaX-binding pocket, particularly the X-residue recognition residues, as well as the uniquely shaped exit groove provide opportunities to define ligands, which will be selective for the C. albicans enzyme.
2018-04-03T03:43:21.178Z
2008-11-14T00:00:00.000
{ "year": 2008, "sha1": "09f8103c1af07d4d8b9ac12f9149b94cf0180f68", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/283/46/31933.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "6e6bea1f7a1a0669bc74f796e8e1ca82433b42a5", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
203369121
pes2o/s2orc
v3-fos-license
Corporate Social Responsibility Disclosure , Environmental Performance , and Tax Aggressiveness Corporate Social Responsibility Disclosure, Environmental Performance, and Tax Aggressiveness This study aims to examine the influence of the corporate taxpayers' level of CSR disclosure and environmental performance on the level of tax aggressiveness.This study took a sample of non-financial companies listed on the Indonesian Stock Exchange during 2009-2012.This study shows that the corporate taxpayers' level of CSR disclosure has significant negative effect towards the tax aggressiveness.It means the higher the level of the CSR disclosure, the lower the company's tax aggressiveness.This study also proves that good environmental performance will strengthen the negative effect of CSR disclosure on tax aggressiveness.The assessment of environmental performance is conducted by the Ministry of Environment as independent party.It means that the higher the score of company's environmental performance, the higher the commitment to pay taxes.This study supports the view that more socially responsible corporations are likely to be less tax aggressive. INTRODUCTION Corporate Social Responsibility (CSR) is a concept that started growing since the 1950s.Howard Bowen, American economists in 1953 stated that a businessman should have the responsibility to promote hope, purpose and values in society (Hartanti 2006).Some views relate the concept of CSR and corporate taxes because CSR is corporate expenditure for the benefit of stakeholders and taxes paid by the company is also the expenses paid to the government for the benefit of society. Because of similar purpose of this expenditure, Avi-Yonah (2008) stated that the determination of corporate tax policy is influenced by the company's perspective on corporate responsibility to the community in the form of CSR. The relation between corporate's perspective on CSR and corporate's policies on taxes generate some research linking CSR and tax avoidance (Huseynov and Klamm, 2012;Sikka 2010;Hasseldine and Morris 2012), CSR and tax aggressiveness (Lanis and Richardson 2012) and company's taxes motivation in CSR (Carrol and Joulfaian 2005).These studies resulted in mixed findings.Huseynov and Klamm (2012) found that companies that have good relationship with community tend to not doing tax avoidance, while companies that do not have a good relationship with the community tend to do tax avoidance.Sikka (2010) found that a company with good CSR turned out tax evasion.However, study conducted by Lanis and Richardson (2012) on CSR disclosure stated that a company with good disclosure will have low level of tax aggressiveness.Lanis and Richardson (2012) stated that the term aggressiveness of tax, tax avoidance and tax management has the same meaning.From the findings above, there is a consensus that company's policies on CSR affect the amount of taxes paid by the company.However, the results still inconclusive , whether company's CSR policy has positive or negative effect on the payment of corporate taxes. In Indonesia, there were pros and cons whether Corporate Social Responsibility Various parties explain their views on CSR.Elkington (2007) proposed the concept of the triple bottom line (people, profit, planet).Holme and Watts (2006) stated that CSR is the commitment of business to act ethically, contribute to economic development, and improve the quality of life of workers, local communities and society. 
One of the most influential literatures in CSR is written by Carroll (1979) and refined in 1991 that proposed the CSR pyramid which consist of economic, legal, ethical and philanthropic.The meaning of the pyramid is a company that engage in CSR will work to generate profit, obey the law, behave ethically and be good company. According to Avi-Yonah (2008) and Schon (2008), a company is a real-world entity that must survive in a competitive business environment and should be associated with many entities and individuals. A company will develop policies, strategies and operations that are not merely centered on shareholder welfare but also for stakeholders (government, politicians, trading communities, employees, suppliers and customers) and public community.Porter and Kramer (2006) stated that the company that has high social responsibility will have good image, strong brand and increasing in the value of company. It can be concluded that CSR not only includes the responsibility to stakeholders and public, but also the implementation of good business ethics by the company.In Indonesia, various studies have linked CSR with a variety of variables, such as financial performance (Wijayanti and Prabowo 2011) and earnings respond coefficient (Sayekti and Wondabio 2007). Tax Aggressiveness Lanis and Richardson (2012) stated that the aggressiveness of the tax, tax evasion and tax management is a term that refers to the same meaning.Frank et al . (2009) defines tax aggressiveness as management efforts to reduce taxable income through tax planning activities via legal, illegal, and in between (gray area).Hanlon and Hetzman (2010) defines tax evasion as a tax reduction and highlight the broad scope of tax evasion, the tax management, tax planning, tax aggressiveness, tax evasion and tax sheltering. Additionally, Hanlon and Hetzman stated that positive book-tax difference (BTD) and lower effective tax rate reflects the tax evasion behavior. On the one hand, the tax is an expense and the company is trying to do the management of tax or tax planning to reduce costs, increase profitability and shareholder value.On the other hand, companies involved in the tax shelter or tax evasion and make decisions based solely on the desire to reduce tax referred to as a company that does not have a social responsibility (Schon 2008). 
One way to do a tax management is to use the services of a tax consultant.Research Mills (1998) showed that companies that use the services of consultants have a low effective tax rate.Due to tight connection between firm's view of CSR and corporate tax policy, some researchers relate CSR and tax avoidance.Huseynov and Klamm (2012) found that firms having good public relationship tend to not make any tax avoidance, otherwise the ones who don't tend to make some.Lanis and Richardson (2012) Since 1995, the Ministry of Environment has evaluated firms' environmental performance in In-donesia.Some researchers investigate the association between environmental performance and the level of CSR disclosure.Suratno (2006) stated that there is positive relationship between firm's PROPER rating and its CSR disclosure.Al Tuwajiri and Sulaiman (2003) stated that there is positive relationship between firm's environmental performance rating and its CSR disclosure.Based on findings above, we can conclude that the companies with good environmental performance will report high level disclosure of CSR to explain its CSR activities.We can also conclude that the companies with good environmental performance have higher commitment to do the CSR activities and it reflected in their CSR disclosure.Their commitment will prevent them to do the tax aggressiveness.So, it is predicted that good environmental performance will strengthen the negative effect of CSR disclosure on tax aggressiveness. CSR and Tax Regulation in Indonesia Based on literatures above, hypothesis that could be developed is as below: H2: Environmental performance strengthens negative effect of firm's Corporate Social Responsibility disclosure towards corporate tax aggressiveness. Sample and Research Data This research uses sample of listed firms in BEI (Indonesia Stock Exchange).The data for this research is retrieved from: Listed Period of Firm (AGEPUB) Firm that has just been listed on stock exchange would be looked forward to have good financial performance, so that the firm would tend to apply tax aggressiveness.(Lanis and Richardson 2012). Firm that has long been listed would be more conform to the regulation in the stock exchange (Beasley 1996).AGEPUB is measured by listing period of the firm on BEI (Indonesia Stock Exchange).AGEPUB is predicted to negatively affect tax aggressiveness (Current ETR). Block Holder (BHD) Shleifer and Vishny (1986) From 18 sample firms, mean of PROPER rating of firms attending PROPER is 3.13.From all firms listed on BEI (Indonesia Stock Exchange), the number of firms having full PROPER rating for three years is 18 firms with average PROPER score of 3. Score 3 denotes that those firms acquire blue rating, in which is the third rating below gold and green.Blue rating implies that firms have put serious efforts on entailed environmental management in accordance with prevailing provisions or regulations. 
Statistical Test Result of Hypothesis 1 Regression test was undertaken with panel data by using eviews.This research used Chow Test to determine whether data processing is better Sampel Description Jumlah Hypothesis 1 Non-financial companies listed on the Stock Exchange in 2012 379 Incomplete company financial data incomplete annual report data from 2009-2012 (141) Companies that have negative earnings before tax, tax is now zero, the current tax ratio above 1 and their tax refund (49) Hausman Test, the result showed that the model of hypothesis 2 using Random Effect.Table 4 provides regression result of hypothesis 2. From regression result without moderating, it is noticed that PROPER negatively influence current ETR exhibits that the higher the PROPER rating, the lower the tax aggressiveness.This demonstrates that firms holding decent environmental performance would be more ethical and tend to not exercise tax avoidance.This result also support the view that more responsible and care the company on their environment, the less the tax aggressiveness. Regression result from disclosure level of CSR moderated with environmental performance shows negative coefficient means that the higher companies should be required to engage in CSR activities.The pros and cons arose when the government issued Act No. 40 of 2007 on Limited Liability Company that requires company conduct its business activities in areas related to natural resources to implement social and environmental responsibility.The opponents argue that the company already pays taxes and the tax is spending for the benefit of society.Additional cost of engaging in CSR will become the company's expense and reduce the competitiveness of companies (Djimanto 2007).But in the end the government issued Act No. 36 of 2008 which states that expense on CSR activities can be deducted from taxable income.The act states that the amount of taxable income for domestic taxpayers is determined based on gross income less costs to acquire, collect, and maintain income, including the cost of CSR such as donations to the national disaster, donations for the research and development carried out in Indonesia, the cost of construction of social infrastructure, donations for educational facilities, and donations for developing sport activities.Further explanation about CSR expenses that can be deducted from gross income is regulated in Government Regulation No. 93 in 2010.However, although at first there are pros and cons, CSR development in Indonesia is increasing.As an illustration, a company that publishes sustainability reporting reports increased by 100% over the 6 years from 2005 to 2011 (ISRA 2011).The Government through the Ministry of Environment also increases the supervision of the company which has impact on the environment using tool which is called the Environmental Performance Rating in Environmental Management (acronym in Indonesia is PROPER).From 2009 to 2011 PROPER participants increased 45% from 690 to 995 companies.The degree of compliance of PROPER participants in 2011 had reached 66% means that 66% of companies already meet the criteria of compliance (PROPER Report 2011).The lack of research in Indonesia that investigate the influence of disclosure of CSR performance and score of environmental performance on the level of corporate tax aggressiveness raises motivation to examine how the influence of the level of CSR disclosure and environmental performance of companies on the level of tax aggressiveness. 
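The dependent variable and the sample-selection filters listed above can be expressed compactly. In the sketch below, Current ETR is computed the conventional way (current tax expense divided by pre-tax income); the paper's exact operationalization is not spelled out in this excerpt, and the column names and figures are hypothetical.

```python
import pandas as pd

# Sketch of the sample-selection filters listed above, applied to a hypothetical firm-year table.
df = pd.DataFrame({
    "firm": ["A", "B", "C", "D"],
    "current_tax_expense": [120.0, 0.0, 80.0, -15.0],
    "pretax_income": [500.0, 300.0, 60.0, 200.0],
})

# Conventional definition of the current effective tax rate (assumption, see note above).
df["current_etr"] = df["current_tax_expense"] / df["pretax_income"]

keep = (
    (df["pretax_income"] > 0)            # drop firms with negative earnings before tax
    & (df["current_tax_expense"] > 0)    # drop zero current tax and tax refunds (negative tax)
    & (df["current_etr"] <= 1)           # drop current ETR above 1
)
sample = df[keep]
print(sample[["firm", "current_etr"]])
```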
Government has started to set deductible Corporate Social Responsibility (CSR) expenses from company's income by issuing PP (Government Regulation) No. 93 Year 2010.Forms of CSR expenditures which are tax deductible expenses are as follows: a. Donation for national disaster management, b.Donation for research and development.c.Donation for educational facility, which is a donation of educational facilities which are distributed through educational institutions; d.Donations in order to develop the sport.e. Cost of social infrastructure development which are costs incurred for the purpose of developing infrastructure for public and nonprofit interests.CSR expenditure in form of donation and/or any expenses as mentioned above could be deducted from gross income with some requirements.The amount of donation and/or social infrastructure development expenses that could be deducted from gross income as referred to Article 1 for 1 (one) year are restricted to not exceed 5% of previous Tax Year's fiscal net income.Program of Firm's Performance Rating in Environmental Management Since 1995, the Ministry of Environment has carried through Program of Firm's Performance Rating in Environmental Management (Program Penilaian Peringkat Kinerja Perusahaan dalam Pengelolaan Lingkungan/PROPER) as an effort to monitor environmental performance executed by companies.PROPER's criteria consist of two parts i.e. criteria of compliance rating and criteria of Beyond Compliance Rating (PERMENLH/ Minister of Environment Regulation No. 5 Year 2011).For compliance rating, aspects assessed are compliance to: 1) requirement of environment document and its reporting, 2) control of water pollution, 3) control of air pollution, 4) regulation of waste management, and 5) likelihood of land damage.Beyond compliance rating is more dynamic as adjustable to technology development, best practice of environment management application and global environment issues, consists of: 1) assessment of environment management system, 2) assessment of resource utilization and 3) assessment of society utilization.CSR and Tax AggressivenessAvi-Yonah (2008) claimed three firm's points of view to CSR affecting corporate tax policy.Such points of view are the artificial entity view, the real entity view and the aggregate view.The artificial entity view sees firm owing its country so that being involved in CSR is its mission and paying tax is one of the ways to fulfill its CSR obligation.The real entity view sees firm having rights and obligations as if society so that firm is suggested to exercise CSR.For tax payment, firm tends to obey the duty to pay and is not involved in overly aggressive tax management.The aggregate or nexus of contract view sees CSR as prohibited activities since it would direct managers to become irresponsible to their shareholders that have selected them.In this view, firm tries to maximize shareholders' profit by lessening corporate tax to the minimum level.Avi-Yonah (2008) declared that, given any views held by firms, they are not expected to have strategic tax behavior merely designed for tax reduction.It is because aggressive tax behavior would cause country to experience revenue slump that further influence the construction of public facilities.Avi-Yonah (2008) ascertained that firm's decision about the extent of tax reduction would be affected by firm's attitude towards CSR. 
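As a concrete illustration of the 5% ceiling described above (PP No. 93/2010), here is a minimal helper that caps the deductible CSR donation at 5% of the previous tax year's fiscal net income; the function name and example figures are ours.

```python
# Sketch of the deductibility cap described above: qualifying CSR donations are deductible
# only up to 5% of the previous tax year's fiscal net income.
def deductible_csr(donation: float, prior_year_fiscal_net_income: float, cap_rate: float = 0.05) -> float:
    if prior_year_fiscal_net_income <= 0:
        return 0.0
    cap = cap_rate * prior_year_fiscal_net_income
    return min(donation, cap)

# Example: prior-year fiscal net income of 10 billion IDR, donation of 700 million IDR
print(deductible_csr(donation=700e6, prior_year_fiscal_net_income=10e9))  # capped at 500 million
```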
The regression models used in this study are:

Model 1 (for hypothesis 1):
CETR_it = a + b1·CSRD_it + b2·SIZE_it + b3·LEV_it + b4·ROA_it + b5·AGEPUB_it + b7·MTOBOD_it + b8·BHD_it + b9·INV_it + b10·MKTBK_it + b11·INDSEC_it + e_it

Model 2 (for hypothesis 2):
CETR_it = a + b1·CSRD_it + b2·KL_it + b3·CSRD_it·KL_it + b4·SIZE_it + b5·LEV_it + b6·ROA_it + b7·AGEPUB_it + b8·MTOBOD_it + b9·BHD_it + b10·INVINT_it + b11·MKTBK_it + b12·INDSEC_it + e_it

INVINT : Inventory divided by total assets
MKTBK : Market value of equity divided by book value of equity
INDSEC : Dummy variable for industrial sector

Table 1. Technique of Sample Selection
Table 4. Regression Results for Hypothesis 2

The higher the CSR disclosure and environmental performance, the lower the tax avoidance. This result attests that good environmental performance strengthens the negative effect of CSR disclosure on tax aggressiveness. It affirms that a company with a high level of CSR disclosure whose good environmental performance has been verified by independent evaluation exhibits less tax-aggressive behavior. Based on this result, we recommend that the government evaluate companies' environmental and social performance in order to verify their CSR reports.
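A minimal sketch of how a moderated regression of this form (Model 2) could be estimated: simulated data, illustrative coefficient signs, and only a subset of the controls are used here, and the statsmodels formula interface is chosen for brevity. Variable names CSRD, KL, SIZE, LEV, and ROA follow the paper; nothing else is taken from it.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated firm-year data (illustrative only).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "CSRD": rng.uniform(0, 1, n),      # CSR disclosure index
    "KL": rng.integers(1, 6, n),       # environmental-performance (PROPER) rating, 1-5
    "SIZE": rng.normal(12, 1, n),      # e.g. log of total assets
    "LEV": rng.uniform(0, 0.8, n),
    "ROA": rng.normal(0.05, 0.05, n),
})
# Simulated dependent variable with an arbitrary interaction effect; signs are illustrative only.
df["CETR"] = (0.20 + 0.05 * df["CSRD"] + 0.01 * df["KL"] + 0.02 * df["CSRD"] * df["KL"]
              + 0.005 * df["SIZE"] - 0.02 * df["LEV"] + 0.1 * df["ROA"]
              + rng.normal(0, 0.02, n))

# "CSRD * KL" expands to CSRD + KL + CSRD:KL; the CSRD:KL term is the moderation effect of interest.
model = smf.ols("CETR ~ CSRD * KL + SIZE + LEV + ROA", data=df).fit()
print(model.params)
```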
2019-01-25T04:20:08.349Z
2016-08-01T00:00:00.000
{ "year": 2016, "sha1": "962ad49a4c3a4faa6ca80b1d5fd92db08517bf57", "oa_license": "CCBYSA", "oa_url": "https://doi.org/10.21632/irjbs.9.2.93-104", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "962ad49a4c3a4faa6ca80b1d5fd92db08517bf57", "s2fieldsofstudy": [ "Environmental Science", "Business", "Economics" ], "extfieldsofstudy": [ "Business" ] }
9848240
pes2o/s2orc
v3-fos-license
Efficacy of Cerebral Autoregulation in Early Ischemic Stroke Predicts Smaller Infarcts and Better Outcome Background and purpose Effective cerebral autoregulation (CA) may protect the vulnerable ischemic penumbra from blood pressure fluctuations and minimize neurological injury. We aimed to measure dynamic CA within 6 h of ischemic stroke (IS) symptoms onset and to evaluate the relationship between CA, stroke volume, and neurological outcome. Methods We enrolled 30 patients with acute middle cerebral artery IS. Within 6 h of IS, we measured for 10 min arterial blood pressure (Finometer), cerebral blood flow velocity (transcranial Doppler), and end-tidal-CO2. Transfer function analysis (coherence, phase, and gain) assessed dynamic CA, and receiver-operating curves calculated relevant cut-off values. National Institute of Health Stroke Scale was measured at baseline. Computed tomography at 24 h evaluated infarct volume. Modified Rankin Scale (MRS) at 3 months evaluated the outcome. Results The odds of being independent at 3 months (MRS 0–2) was 14-fold higher when 6 h CA was intact (Phase > 37°) (adjusted OR = 14.0 (IC 95% 1.7–74.0), p = 0.013). Similarly, infarct volume was significantly smaller with intact CA [median (range) 1.1 (0.2–7.0) vs 13.1 (1.3–110.5) ml, p = 0.002]. Conclusion In this pilot study, early effective CA was associated with better neurological outcome in patients with IS. Dynamic CA may carry significant prognostic implications. blood flow can adapt to pressure changes and/or demand, i.e., cerebral autoregulation (CA) (4). Dynamic CA (dCA) can be assessed using transfer function analysis (TFA) between spontaneous oscillations of ABP and cerebral blood flow velocity (CBFV) (4). CA has been studied in acute stroke (5) with conflicting results (5)(6)(7)(8), but the early hours, where penumbra is more vulnerable, has been largely ignored. Therefore, we aimed to assess dCA within 6 h of IS symptoms and its relationship with final infarct volume and 90-day functional outcome. MaTerials anD MeThODs study Population São João Hospital center ethical committee approved the study. Written informed consent was obtained. We included consecutive patients with middle cerebral artery (MCA) territory acute IS, admitted to our stroke unit. Ultrasound studies (Vivid e; GE) excluded hemodynamically significant extra-or intracranial stenoses. Patients with MCA proximal occlusion were excluded, as it prevented monitoring. Outcomes and statistics Baseline National Institutes of Health Stroke Scale (NIHSS) scores were calculated. Independence, modified Rankin Scale (0-2), at 90 days determined the outcome by a stroke physician blinded for the initial assessment. Head CT (Siemens Somaton/ Emotion Duo, Germany) at 24 h measured infarct volume with ABC/2 formula. Shapiro-Wilk test determined normality. Mann-Whitney and χ 2 tests compared hemodynamic measurements between subgroups. ROC analysis found relevant cutoff values. After dichotomization, multivariate logistic regression calculated the odds ratio. Relationship between continuous variables was determined by Spearman's correlation and adjusted with multivariate linear regression models. Level of significance was p < 0.05. resUlTs We recruited 30 patients characterized in Table 1. The relationship between dCA and outcome is presented in Table 2. Independence at 3 months was associated with higher phase (p = 0.024) and lower gain (p = 0.045) in the stroke hemisphere within 6 h of onset. 
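For readers unfamiliar with transfer function analysis, the sketch below shows the standard way gain, phase, and coherence are obtained from simultaneous ABP and CBFV recordings via cross-spectral estimates. The signals are simulated, and the sampling rate, window length, and low-frequency band are assumptions rather than the study's actual TFA settings.

```python
import numpy as np
from scipy import signal

# Sketch of transfer function analysis (TFA) between arterial blood pressure (ABP, input)
# and cerebral blood flow velocity (CBFV, output). Data are simulated for illustration.
fs = 10.0                              # Hz (assumed resampling rate)
t = np.arange(0, 600, 1 / fs)          # 10-minute recording, as in the protocol
rng = np.random.default_rng(1)
abp = 90 + 5 * np.sin(2 * np.pi * 0.03 * t) + rng.standard_normal(t.size)
cbfv = 55 + 2 * np.sin(2 * np.pi * 0.03 * t + np.deg2rad(40)) + rng.standard_normal(t.size)

nperseg = 1024
f, p_aa = signal.welch(abp, fs=fs, nperseg=nperseg)        # ABP auto-spectrum
f, p_ab = signal.csd(abp, cbfv, fs=fs, nperseg=nperseg)    # ABP -> CBFV cross-spectrum
f, coh = signal.coherence(abp, cbfv, fs=fs, nperseg=nperseg)

gain = np.abs(p_ab) / p_aa             # cm/s per mmHg
phase_deg = np.degrees(np.angle(p_ab))

# Average over a low-frequency band commonly used for dynamic CA (band limits are an assumption).
band = (f >= 0.02) & (f <= 0.07)
print(f"LF gain ≈ {gain[band].mean():.2f}, phase ≈ {phase_deg[band].mean():.1f}°, "
      f"coherence ≈ {coh[band].mean():.2f}")
```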
ROC curve analysis found best cutoffs, associated with independency, in phase at 37° (affected side, AUC = 0.713, p = 0.028; sensitivity 70%, specificity 79%) but gain underperformed (AUC = 0.654, p = 0.112). Based on these cutoffs, independency at 3 months ( Figure 1A analyzed as a continuous variable, it correlated with stroke volume in the affected side (r = −0.444, p = 0.020) but not contralateral (r = −0.125, p = 0.409). In multivariate linear regression, only NIHSS significantly predicted infarct volume at 24 h (p = 0.002) but not phase (p = 0.457). Baseline systolic ABP was inversely correlated with infarct volume at 24 h (r = −0.665, p = 0.008) but only in the subgroup with lower phase in the infarct side ( Figure 1D). DiscUssiOn We showed that the efficacy of dCA during the first 6 h after symptom onset is associated with smaller infarct volumes at 24 h and better neurological outcome at 3 months. Transfer function analysis of the spontaneous ABP and CBFV oscillations is increasingly used to assess dCA in a number of neurovascular disorders (7)(8)(9)(10). The phase of this relationship, which represents the time delay between these oscillating waveforms, has emerged as a significant predictor of outcome. Lower phase shift (ineffective CA) has been linked to carotids or MCA stenosis (11) or development of vasospasm after subarachnoid hemorrhage (10). In patients with IS, phase has also been linked to stroke severity (5,7). The impaired CA can be also related to patient medical conditions not addressed in this study. For example, impaired cerebral autoregulation in patients with sleep apnea has been linked to an increased risk of stroke (12). Our findings, which build on these prior studies, show that effective dCA, as demonstrated by higher phase shift, is linked to smaller stroke volumes and better neurological outcome. Moreover, consistent with prior work where a phase >30 represents effective or intact autoregulation (4,5,9), we also found a cutoff value of 37° for phase that was predictive of neurological independence at 3 months and smaller stroke volumes at 24 h. Interestingly, we also found that a lower systolic ABP is associated with larger infarcts but only if CA is impaired in the infarct side (phase <37°). This observation enhances the biological plausibility of the link between phase (dCA), stroke volume and clinical outcome, since lower ABP would only endanger the ischemic penumbra with further hypoperfusion if CA was impaired. Taken together, CA assessment could, therefore, identify patients who would benefit from BP augmentation in future clinical trials (13) Perfusion imaging, instead of CA assessment, may have been more helpful to explain larger infarcts at 24 h by estimation of initial penumbra area. However, an impaired CA at baseline could itself be responsible for this larger penumbra. The question remains to be answered in future studies with correlative measurements with perfusion scanning. In line with prior studies (5), gain seems not to be a good marker for stroke outcome. Nevertheless, lower gain values (more effective CA) on the stroke side seemed to be associated with independence at 3 months. This study has some limitations. As it is a pilot study, we enrolled a small number of subjects. Regarding the TCD method, there are limitations inherent to CA assessment with TCD (4), as some non-stationary conditions (e.g., agitation, mental changes) might turn linear methods like TFA less reliable. Also, M1 occlusions could not be assessed. 
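The cutoff-finding step above can be sketched as follows: an ROC curve over candidate phase thresholds, with the Youden index selecting the operating point. The phase values and outcomes below are invented for illustration; only the procedure mirrors the analysis reported above.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Sketch: deriving a phase cutoff from an ROC curve with the Youden index (J = sens + spec - 1).
phase = np.array([12, 18, 22, 25, 30, 33, 36, 38, 40, 44, 48, 52, 55, 60, 65, 70])       # degrees
independent = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1])                  # mRS 0-2

fpr, tpr, thresholds = roc_curve(independent, phase)
youden = tpr - fpr
best = np.argmax(youden)

print(f"AUC = {roc_auc_score(independent, phase):.2f}")
print(f"best cutoff ≈ {thresholds[best]:.0f}°  (sens {tpr[best]:.0%}, spec {1 - fpr[best]:.0%})")
```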
As CA was assessed after IV thrombolysis within 6 h of symptom onset, the non-occluded M1 cases in this study include recanalized MCA or branch occlusions, while those excluded because of M1 occlusion are mostly non-recanalized MCA. This remains a limitation, but an occluded M1 after IV thrombolysis is itself a marker of very poor prognosis, and we believe that CA assessment would not add any significant contribution in this scenario; we also monitored these excluded cases, and only 1/16 (6%) was independent at 3 months, and all had total MCA area involvement. Thus, our study indicates that even when the MCA is recanalized within 6 h, patients with better CA (phase ≥37°) have a higher chance of being independent at 3 months. Concerning the infarct volume, we used CT, which is not as reliable as MRI. However, most of the stroke patients had easily identifiable partial or total areas of MCA infarct. Although CT is a coarse measure, we believe that the overall results were not influenced by this method. In summary, we showed that the efficacy of dCA in the early hours of IS is linked to infarct volume at 24 h and neurological outcome at 3 months. Rapid bedside assessment of CA may help to identify a high-risk population with impaired CA who would benefit from different BP management. ETHICS STATEMENT: This study was carried out in accordance with the recommendations of the São João Hospital center ethical committee with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the São João Hospital center ethical committee. AUTHOR CONTRIBUTIONS: PC reviewed the literature, designed the study, extracted the data, analyzed the results, and wrote the paper. EA designed the study, analyzed the results, and co-wrote the paper. IR and JS designed the study, analyzed the results, and reviewed the paper. FS reviewed the literature, designed the study, analyzed the results, and co-wrote the paper.
2017-05-04T13:21:20.278Z
2017-03-24T00:00:00.000
{ "year": 2017, "sha1": "704d881a9828167b67e26d9564581974fd68e909", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2017.00113/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "704d881a9828167b67e26d9564581974fd68e909", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232095617
pes2o/s2orc
v3-fos-license
The genetic background and vitamin D supplementation can affect irisin levels in Prader–Willi syndrome Background Prader–Willi syndrome (PWS) is associated to distinctive clinical symptoms, including obesity, cognitive and behavioral disorders, and bone impairment. Irisin is a myokine that acts on several target organs including brain adipose tissue and bone. The present study was finalized to explore circulating levels of irisin in children and adult PWS patients. Methods Seventy-eight subjects with PWS, 26 children (15 females, mean age 9.48 ± 3.6 years) and 52 adults (30 females, mean age 30.6 ± 10.7) were enrolled. Irisin serum levels were measured in patients and controls. Its levels were related with anthropometric and metabolic parameters, cognitive performance and bone mineral density either in pediatric or adult PWS. Multiple regression analysis was also performed. Results Irisin serum levels in PWS patients did not show different compared with controls. A more in-depth analysis showed that both pediatric and adult PWS with DEL15 displayed significantly reduced irisin levels compared to controls. Otherwise, no differences in irisin concentration were found in UPD15 patients with respect to controls. Our study revealed that in pediatric PWS the 25(OH) vitamin-D levels affected irisin serum concentration. Indeed, patients who were not supplemented with vitamin D showed lower irisin levels than controls and patients performing the supplementation. Multiple regression analysis showed that irisin levels in pediatric and adult PWS were predicted by the genetic background and 25(OH)-vitamin D levels, whereas in a group of 29 adult PWS also by intelligent quotient. Conclusion We demonstrated the possible role of genetic background and vitamin-D supplementation on irisin serum levels in PWS patients. Introduction Prader-Willi syndrome (PWS) is a rare genetic disease with distinctive clinical symptoms that critically impair patients' quality of life. PWS arises because of the lack of expression of genes located on the paternal chromosome 15q11.2-q13. Three main genetic mechanisms have been recognized in determining PWS: interstitial deletion of the proximal long arm of chromosome 15 (del15q11-q13) (DEL15), maternal uniparental disomy of chromosome 15 (UPD15), and imprinting defects [1]. The main features of the PWS phenotype are broader, and include neonatal hypotonia, poor feeding and initial failure to thrive, followed by hyperphagia and early childhood-onset obesity (if uncontrolled), multiple endocrine abnormalities (including growth hormone deficiency (GHD) and hypogonadism], motor development problems, dysmorphic features, cognitive impairment, and behavioral issues [2]). Notably, PWS patients also show bone impairment. In detail, prepubertal PWS children display normal bone mineral density (BMD) (if adjusted for the reduced height) [3][4][5], but in adolescence and adulthood, they presented decreased total BMD and bone mineral content (BMC) possible, because they did not achieve bone mineral accrual, also due to pubertal delay/hypogonadism [6][7][8][9]. Consequently, osteoporosis is predominant in PWS individuals, who also have other orthopedic complications related or worsened by weight gain, including scoliosis, kyphosis, hip dysplasia, flat feet, genu valgum and fractures [8,10]. Recently, researchers have shown an increased interest on irisin, a myokine primarily secreted by skeletal muscle, involved in bone, adipose tissue and brain homeostasis. 
In detail, in young mice irisin injection mimicked the effects of exercise by increasing cortical bone mass and strength [11]. In hindlimb unloaded mice, intermittent administration of irisin prevented bone loss [12]. Interestingly, the myokine is also known to determine the browning of white adipose tissue [13] and to work as an adipokine as it is secreted by the same tissue [14]. Additionally, it has also been reported that irisin may have an effect on certain brain functions, and consequently it is involved in cognitive impairment and in neurodegenerative disease [15,16]. These issues prompted some authors to evaluate irisin levels in adult PWS patients. In detail, Hirsch et al. found increased amounts of salivary irisin in obese PWS with respect to non-obese controls, whereas the plasma levels of irisin did not change significantly between the two groups [17,18]. Recently, Mai et al. also reported that PWS patients and controls have similar circulating irisin levels [19]. The present study was finalized to explore circulating levels of irisin in children and adult PWS patients in relation to the genetic background, metabolic profile, cognitive impairment and bone status. Patients Seventy-eight subjects with PWS, 26 children (15 females, mean age 9.48 ± 3.6 years) and 52 adults (30 females, mean age 30.6 ± 10.7) were included in this study. All patients showed the typical PWS clinical phenotype [20]. Genetic investigation was performed in all PWS patients, and 48 of them had DEL15 (32 adults and 16 children), while UPD15 was found in the remaining individuals (20 adults and 10 children). All PWS children were on growth hormone (GH) treatment from at least 12 months, at a dosage ranging from 0.025 to 0.035 mg/kg/day. Among PWS adults, 6 out of the 52 subjects presented a severe degree of GHD, according to a GH response to GHRH plus arginine less than 4.2 ng/ml [21], and received GH therapy at a mean dose of 0.23 mg/day. At all ages, the GH dose was adjusted to maintain serum total IGF-I within 2 SD from an agematched reference value to avoid overdosing. At the time of the study, 10 females and 1 male underwent sex steroid replacement treatment. As controls, we evaluated a group of 26 children (17 females, mean age 9.4 ± 3.29 years), referred to our hospital for minor surgery or electrocardiographic screening, and 54 normal weight adults (26 females, mean age 36.5 ± 12.5 years) enrolled on a voluntary basis. PWS and control children performed an average of 2 h per week of school sports, whereas adult PWS and controls about 3 h per week. Four out of 26 PWS children (15%) and 26 out of 52 PWS adults (50%) were on vitamin D supplementation at the moment of the study [cholecalciferol mean dosage: children 500 UI/daily (12.5 mcg/daily); adults 800 UI/daily (20 mcg/daily)]. Exclusion criteria from the study for both patients and controls were the use of mineral and vitamin supplements, except for vitamin D, the presence of chronic diseases with a possible impact on bone metabolism (e.g., hypothyroidism or hyperthyroidism, Cushing's syndrome, celiac disease, anorexia nervosa, etc.), the use of medications affecting bone turnover, e.g., corticosteroids, and fractures in 6 months preceding the study. Five adult PWS patients had a history of previous post-traumatic fractures at different sites (ankle, ulna, radio, malleolus, fibula, and phalanges). None of the PWS pediatric patients experienced fractures. No patient had previously undergone bariatric surgery. 
Written informed consent was obtained from all the legal guardians, and from the patients when applicable, prior to inclusion. All procedures were approved by local institutional review boards. Anthropometric measurements All patients underwent a general clinical examination, anthropometric measurements (height in cm, weight in kg) and, for the pediatric age, data were plotted on Italian growth charts and computed as percentiles and SDS [22]. BMI was defined as weight in kilograms divided by the square of height in meters. The international standards for sex-and age-specific BMI percentiles were used for subjects aged 2-18 years [22]. BMI standard deviation score (SDS) was derived from the published Center for Disease Control and Prevention (CDC) standards [23]. The BMI cut-off point of > 2 SDS was used to define obesity, and between 1.4 and 2 SDS to define overweight for individuals < 18 years of age. Considering adult age, we considered as obese, overweight and normal-weight those subjects with a BMI > 30, in the range of 25-30 and < 25, respectively (NIH). The pubertal and genital stages were assessed according to the Tanner criteria [24]. Biochemical measurements Blood samples were drawn under fasting conditions, centrifuged, and stored at − 80 °C until required. Blood glucose, insulin, total cholesterol (TC), high (HDL) and low (LDL) density lipoprotein cholesterol, triglycerides (TG), were measured after overnight fasting in all subjects, using standard methods. Values of TC, LDL, HDL, and TG were considered in the normal range if within the 5th and the 95th percentile. Calcium, phosphorus and alkaline phosphatase (ALP) concentrations were measured by the nephelometric method. Serum active intact parathyroid hormone (PTH) and 25(OH) vitamin D were measured by immunological tests based on the principle of chemiluminescence using commercial kits (Liaison assay; DiaSorin, Stillwater, Minnesota, USA). Osteocalcin serum concentration was measured by enzyme immunoassay (IBL International GmbH, Hamburg, Germany). Irisin levels were assessed using a commercially available kit (AdipoGen, Liestal, Switzerland). Insulin resistance was assessed calculating the homeostasis model assessment (HOMA) [25]. Total body scans were obtained to estimate fat mass (FM%), fat free mass (FFM%) expressed as percentage of total body weight (bone mass with the skull excluded from analysis (total body less head, TBLH). Bone variables included BMD (TBLH BMD, g/cm 2 ), BMC (TBLH BMC, g); BMD were normalized for height (TBLH BMD-Ht, g/ cm 3 ) to avoid any influence of growth on bone mass [28]. IQ assessment Global IQ evaluation was assessed in a subgroup of patients based on age. In details, in pediatric PWS it was used Wechsler Intelligence Scale for Children-IV (WISC-IV, n = 6) [29] or Leiter International Performance Scale (n = 5) [30]. The Wechsler Adult Intelligence Scale-IV (WAIS-IV) was used in 29 PWS adults. This test allowed the calculation of the total intelligence quotient (IQ) through the standardized administration of scales, including verbal subtests to determine the verbal quotient (VQ), and performance subtests to determine performance quotient (PQ) [31]. Statistical analyses Results are shown as median with interquartiles. The Kolmogorov-Smirnov test was utilized to assess the normality of parameter distribution. Mean values were compared by the unpaired Student t-test in parameters with normal distribution, and linear correlations evaluated with Pearson's correlation coefficient. 
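A minimal sketch of the normality-gated test selection described in this section, using simulated values; Shapiro-Wilk stands in here for the Kolmogorov-Smirnov check used in the study, and the variable names and cutoff are illustrative.

```python
import numpy as np
from scipy import stats

# Sketch: check normality first, then pick a parametric or non-parametric comparison/correlation.
rng = np.random.default_rng(42)
irisin_pws = rng.lognormal(mean=1.5, sigma=0.5, size=30)   # skewed values, e.g. µg/ml
irisin_ctrl = rng.lognormal(mean=1.7, sigma=0.5, size=30)
vitamin_d = rng.normal(30, 8, size=30)                      # ng/ml, paired with irisin_pws

def is_normal(x, alpha=0.05):
    # Shapiro-Wilk used here as a stand-in for the normality check described in the text.
    return stats.shapiro(x).pvalue > alpha

if is_normal(irisin_pws) and is_normal(irisin_ctrl):
    _, p = stats.ttest_ind(irisin_pws, irisin_ctrl)
    test = "unpaired t-test"
else:
    _, p = stats.mannwhitneyu(irisin_pws, irisin_ctrl)
    test = "Mann-Whitney U"
print(f"group comparison ({test}): p = {p:.3f}")

if is_normal(irisin_pws) and is_normal(vitamin_d):
    r, p = stats.pearsonr(irisin_pws, vitamin_d)
    corr = "Pearson"
else:
    r, p = stats.spearmanr(irisin_pws, vitamin_d)
    corr = "Spearman"
print(f"correlation ({corr}): r = {r:.2f}, p = {p:.3f}")
```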
Significance was calculated with the Mann-Whitney test and Spearman's correlation coefficient in parameters with skewed distribution. Multiple regression analyses were applied to identify the relative strength of each biochemical and clinical variable in predicting irisin levels. The Statistical Package for the Social Sciences (SPSS) for Windows, version 22.0 (SPSS Inc., Chicago, IL, USA) was utilized for statistical analysis. The limit of statistical significance was set at 0.05. Linking irisin levels to the genetic background and metabolic profile and bone health in pediatric and adult PWS Clinical and biochemical characteristics of PWS patients are reported in Table 1. Mean serum irisin levels did not change significantly between PWS children and controls (4.37 ± 2.30 μg/ml vs 5.31 ± 2.13 μg/ml, respectively) as well as between adult PWS and controls (6.65 ± 4.49 μg/ ml vs 7.24 ± 5.20 μg/ml, respectively) ( Fig. 1). A more indepth analysis showed that the type of genetic alteration in PWS patients affected irisin levels. In fact, both pediatric and adult PWS with DEL15 showed significantly reduced irisin levels compared with the controls (p < 0.02 and p < 0.04, respectively) ( Fig. 2a, b). Otherwise, pediatric and adult PWS with UPD15 did not display significant differences compared with the controls (Fig. 2a, b). These findings prompted us to evaluate if there were significant differences among clinical, biochemical and bone parameters according to the genetic background. We found that in the pediatric population only total cholesterol and LDL-C statistically differs between DEL15 and UPD15 (Table 2). About bone densitometric parameters, although there was a trend toward the reduction in UPD15 patients compared with DEL15 patients, this did not reach the statistical significance (Table 2). Interestingly, higher level of 25(OH) vitamin D were measured in UPD15 patients compared with DEL15 patients, but the difference did not achieve the statistical significance (Table 2). Furthermore, we found that adult PWS patients performing vitamin D supplementation had irisin levels similar with controls, whereas adult PWS patients without vitamin D supplementation showed a significant reduction of the myokine levels compared with the controls and the patients performing the supplementation (p < 0.001 and p < 0.02, respectively), Fig. 2c. Consistently, adult PWS patients performing vitamin D supplementation showed significantly higher levels of 25(OH) vitamin D compared with PWS patients not performing the supplementation (33.39 ± 9.87 vs 27.03 ± 6.04, p < 0.05). This issue was not investigated in pediatric PWS as only four performed vitamin D supplementation. Finally, if we consider together the vitamin D supplementation and the genetics of adult PWS patients, the lowest levels of irisin are associated to the lacking of vitamin D supplementation for both DEL15 and UPD15 groups (p < 0.004 and p < 0.001, respectively) (Fig. 2d) Table 3 shows the correlations between irisin levels and anthropometric, metabolic parameters, cognitive performance and instrumental parameters of bone health in our study population. In pediatric PWS subjects the irisin levels positively correlated with BMI-SDS, weight-SDS, height-SDS, FM, FM%, glucose, insulin, HOMA-IR, vitamin D dosage supplementation, and 25(OH)-vitamin D levels; otherwise, the irisin levels negatively correlated with HDL, FFM%, calcium, and PTH. 
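As a worked illustration of the comparison scheme described in the Statistical analyses section above (Kolmogorov-Smirnov normality check, then unpaired t-test for normally distributed parameters or Mann-Whitney test otherwise, with Pearson or Spearman correlations), the following sketch could be used. It is not the authors' code; the use of the sample mean and SD inside the KS test, and all names, are illustrative choices.

```python
import numpy as np
from scipy import stats

def group_pvalue(a, b, alpha=0.05):
    """KS normality check on each group, then unpaired t-test (normal) or Mann-Whitney U (skewed)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    def normal(v):
        # KS test against a normal with the sample's own mean/SD (illustrative choice)
        return stats.kstest(v, 'norm', args=(v.mean(), v.std(ddof=1))).pvalue > alpha
    if normal(a) and normal(b):
        return stats.ttest_ind(a, b).pvalue
    return stats.mannwhitneyu(a, b, alternative='two-sided').pvalue

def correlation(x, y, normal_data=False):
    """Pearson's r for normally distributed variables, Spearman's rho for skewed ones."""
    r, p = (stats.pearsonr if normal_data else stats.spearmanr)(x, y)
    return r, p

# Example with made-up irisin values (ug/ml) for patients vs. controls
print(group_pvalue([4.4, 5.1, 3.9, 6.0, 4.1], [5.3, 5.0, 5.8, 4.9, 6.2]))
```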
In PWS adults, irisin levels correlated positively with cholesterol, HDL, FM, FM%, years of GH therapy, glucose, age of sex steroid replacement therapy, age at start of GH therapy, LS T-score, TBLH BMD-Ht, IQ, verbal IQ and performance IQ. In contrast, a negative correlation was found between irisin levels and 25(OH)-vitamin D levels, FFM%, calcium, and PTH. Correlations and multiple regression analysis among irisin levels and anthropometric and metabolic parameters as well as cognitive performance and bone mineral density Additionally, multiple linear regression analyses were performed to explore the factors affecting irisin levels in PWS patients. Multiple linear regression analysis with irisin as the dependent variable demonstrated that weight-SDS, genetics, 25(OH)-vitamin D levels and LS BMD Z-score were the most important predictors in pediatric PWS subjects (Table 4). With adjustment for age, the best predictors of irisin levels in adult PWS were the genetic background, 25(OH)-vitamin D levels, GH therapy, the age at start and the duration of GH treatment, the age at start of sex steroid replacement therapy, IQ and TBLH BMD-Ht. Discussion The present study showed that PWS patients have irisin levels comparable to those of controls; interestingly, a more in-depth analysis showed that both pediatric and adult PWS patients with DEL15 had significantly reduced irisin levels compared with controls, suggesting that the genetic background could be associated with a different metabolic profile in PWS [32]. Additionally, we also showed that patients who did not receive vitamin D supplementation had low serum levels of irisin, despite carrying UPD15 as the genetic alteration. To our knowledge, this is the first study to evaluate irisin levels in both PWS children and adults. Previous studies assessed irisin levels only in adult PWS patients. In detail, Hirsch et al. found higher levels of irisin in the saliva of PWS patients than in controls, probably due to the different composition of saliva, and no significant differences in plasma levels between the two groups [17]. The same authors found that in PWS patients and controls plasma irisin levels positively correlated with total cholesterol and LDL, whereas in saliva the myokine levels were inversely related to HDL and directly related to LDL and triglycerides [17]. Recently, the same research group again demonstrated that serum levels of the myokine did not differ between adult PWS patients and controls, even after a resistance exercise session [18]. These results can be explained by the hypotonic muscle mass of these subjects. [Figure 2 caption (partial): (a, b) pediatric and adult PWS patients with DEL15, but not UPD15, showed significantly reduced irisin levels compared with controls (p < 0.02 and p < 0.04, respectively); (c) adult PWS patients without vitamin D supplementation showed a significant reduction of the myokine compared with controls and with supplemented patients (p < 0.001 and p < 0.02); (d) the lowest irisin levels were associated with lack of vitamin D supplementation in both the DEL15 and UPD15 groups (p < 0.004 and p < 0.001).] Conversely, we found that irisin serum levels correlated directly with HDL in PWS children, whereas in PWS adults a positive correlation was found between irisin levels and total cholesterol, LDL, and HDL, but not triglycerides. Recently, Mai et al. demonstrated that obese PWS adults showed irisin levels comparable to those of controls, but lower than those of obese subjects [19].
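The multiple linear regression reported above (irisin as the dependent variable, Table 4) can be reproduced in outline with statsmodels. The sketch below is hypothetical: the data are made up and the column names are illustrative labels, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame; values and column names are illustrative only.
df = pd.DataFrame({
    "irisin":     [4.2, 5.1, 3.8, 6.4, 5.0, 4.7, 3.5, 5.6],
    "weight_sds": [1.9, 2.4, 1.2, 3.0, 2.1, 1.7, 1.0, 2.6],
    "del15":      [1, 0, 1, 0, 1, 1, 1, 0],          # genetic background: 1 = DEL15, 0 = UPD15
    "vitd_25oh":  [22.0, 31.5, 18.4, 35.2, 27.9, 24.3, 17.0, 33.1],
    "ls_bmd_z":   [-1.2, -0.4, -1.8, 0.1, -0.9, -1.1, -2.0, 0.3],
})

X = sm.add_constant(df[["weight_sds", "del15", "vitd_25oh", "ls_bmd_z"]])
model = sm.OLS(df["irisin"], X).fit()
print(model.summary())   # coefficients indicate the relative strength of each predictor
```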
Interestingly, the authors also reported that in PWS patients irisin levels correlated with triglycerides [19]. This finding may be related to the peculiar body composition of PWS, characterized by lower visceral adipose tissue and decreased muscle mass [33], as well as to the impairment of adipose tissue observed in these subjects [34]. Consistently, we also found that irisin level correlated with FM and FFM. In agreement with our data, Mai et al. reported that a positive correlation was evident between irisin and FM% after adjustment for the PWS status [19]. On the other hand, our data showed that in the pediatric population the levels of the myokine correlated with BMI-SDS and weight-SDS, as well as parameters of glycemic and lipid metabolism. Our paper is the first to demonstrate a direct correlation between the levels of this myokine and LS-BMD Z score and LS-BMD T score in pediatric and adult PWS, respectively. The anabolic role of irisin on bone has been demonstrated in healthy and osteoporotic mice [11,12]. In humans, irisin correlates negatively with the serum levels of sclerostin, an inhibitor of Wnt β-catenin pathway [35]. Moreover, a direct correlation of the myokine with bone strength and BMD has been demonstrated in athletes [36] as well as in soccer players [37]. We also reported a positive association between bone status and serum irisin levels in healthy and diabetic children [38,39]. In our population of pediatric and adult PWS subjects the irisin levels were negatively related to PTH. Consistently, in vitro experiments demonstrated a negative relationship between PTH and irisin, and these findings were further supported by the reduced concentration of the myokine in post-menopausal women with primary hyperparathyroidism with respect to the controls [40]. Although it has been reported that irisin levels were associated with osteoporotic fractures [41], in our study population previous post-traumatic fractures have been described only in five adult PWS patients, and thus it was not possible to evaluate the statistic relevance. Interestingly, our study revealed that in pediatric PWS the vitamin D levels affected irisin serum concentration. This finding is a novelty in pediatric PWS population as previously we did not find significant correlation between irisin and 25(OH) vitamin D levels in healthy children [38]. The [42]. In adult PWS patients we demonstrated a direct link with the age of sex steroid replacement therapy and the age of GH therapy, suggesting the key role of the beginning of the therapy to normalize the levels of the myokine in these subjects. Literature data reported the strict connection between irisin levels and GH as well as the favorable effect of GH replacement therapy on the myokine levels in children with GH deficiency [43]. Interestingly, irisin has been linked to cognitive impairment and neurodegenerative diseases [44], and in adults at risk of dementia its levels correlated with global cognition [45], thus irisin also could represent a serum biomarker of cognitive impairment. Although the results are referred only to 29 adult PWS, we showed that the levels of the myokine positively correlated with total IQ, verbal IQ and performance IQ. However, we did not find the same results for pediatric PWS, thus it is possible that it was evident only in adults, because the IQ impairment was more serious, but also maybe because we had total IQ evaluation only for a restricted number of pediatric subjects. 
Consequently, this issue will require future investigations with larger study cohorts. In conclusion, we did not find different irisin levels in PWS patients compared with matched controls, but we demonstrated a possible role of the genetic background in PWS on irisin levels. Vitamin D supplementation may be a key factor in regulating serum irisin levels. Research involving human participants and/or animals All procedures were approved by local institutional review boards. Informed consent Written informed consent was obtained from all the legal guardians, and from the patients when applicable, prior to inclusion. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2021-03-03T14:52:45.915Z
2021-03-03T00:00:00.000
{ "year": 2021, "sha1": "520abaa87dda07c6b668dbb1f34da4f1fe28b357", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40618-021-01533-4.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "520abaa87dda07c6b668dbb1f34da4f1fe28b357", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15920193
pes2o/s2orc
v3-fos-license
Interaction of Nonlinear Schrödinger Solitons with an External Potential Employing a particularly suitable higher order symplectic integration algorithm, we integrate the 1-d nonlinear Schrödinger equation numerically for solitons moving in external potentials. In particular, we study the scattering off an interface separating two regions of constant potential. We find that the soliton can break up into two solitons, eventually accompanied by radiation of non-solitary waves. Reflection coefficients and inelasticities are computed as functions of the height of the potential step and of its steepness. Introduction Recent years have seen a considerable growth in the interest for nonlinear partial differential equations with soliton solutions. In particular, the nonlinear Schrödinger equation (NLSE) and its variants appear in problems drawn from disciplines as diverse as optics, solid state, particle and plasma physics. There, the NLSE describes phenomena such as modulational instability of water waves [1], propagation of heat pulses in anharmonic crystals, helical motion of very thin vortex filaments, nonlinear modulation of collisionless plasma waves [2], and self-trapping of light beams in optically nonlinear media [3,4,5]. In all these problems, the main interest is in the fact that the NLSE has soliton solutions. These are solitary waves with well defined pulse-like shapes and remarkable stability properties [6]. A great deal of current interest is directed to the question how these states behave under the influence of external perturbations. These can be of various forms. We shall limit ourselves to such perturbations which can be described by potentials. They preserve the hamiltonian structure of the NLSE [2], but not its complete integrability. Other types of perturbation which are also hamiltonian are obtained when either the coefficient in the kinetic term (the 'mass' in the quantum mechanical interpretation) or in the nonlinear term is made spatially non-constant. Such inhomogeneities have indeed been studied more intensely than the ones we shall study below, since they are more relevant for the transmission of pulses through junctions in optical fibers [4,5,7,8,9]. More precisely, we shall consider only potentials which are constant outside a finite interval (we shall only consider the case of one spatial dimension). But we allow different values V± for x → ±∞, mimicking thereby the effect of an interface between two media in which the solitons have different characteristics. We study initial conditions consisting of one single soliton. In general, we have to expect that this soliton will not just be transmitted or reflected. There might also be inelastic scatterings where it breaks up either into several solitons or into non-solitary waves, or both. This problem has been studied previously by several authors. While perturbative approaches were used in [3,4,5,10], straightforward numerical integrations were made in [11]. Both approaches showed that the soliton behaves just like a classical particle if the force created by the potential is sufficiently weak. This is to be expected, but the problem of what happens when the force is strong was left open for a simple potential ramp (the potential considered in [5] was more complicated). It is one purpose of the present paper to close this gap by means of simulations. Another purpose is to show the usefulness of higher order symplectic integration algorithms. As we have already mentioned, the NLSE is a hamiltonian system.
Thus, it is natural to apply to it integration routines which were developed during the recent years and whose main characteristic is that they preserve the hamiltonian structure [12,13,14]. The latter is not true e.g. for standard methods such as Runge-Kutta or predictor-corrector. Such 'symplectic' integrators (the simplest of which is the well known Verlet or 'leap frog' algorithm) have been applied already to the linear [15,16,17,18] and nonlinear [19,20,21,22] Schrödinger equations. The most popular algorithms of this type are split-operator methods. They depend on the hamiltonian being a sum of two terms A and B, each of which can be integrated explicitly. Then one uses the Baker-Campbell-Hausdorff theorem to approximate e^{i(A+B)t} by a product of factors e^{iα_k At} and e^{iβ_k Bt}, where α_k and β_k are real numbers satisfying among others Σ_k α_k = Σ_k β_k = 1. The error is then given by higher order commutators of A and B. We shall in particular apply a fourth order method due to McLachlan and Atela [23] which is applicable if one of the third order commutators vanishes identically. We shall see that this method should be applicable to our problem, and that it is indeed numerically very precise, indicating that the McLachlan-Atela method is the method of choice for a wide class of problems. The NLSE soliton solution Using appropriate units, we can write the NLSE as i ∂ψ(x,t)/∂t = -(1/2) ∂²ψ(x,t)/∂x² - |ψ(x,t)|² ψ(x,t) + V(x) ψ(x,t), (1) where V(x) is the external potential. We shall use for the latter a piecewise linear ansatz, with V(x) ≡ 0 for x < 0, V(x) ≡ V_0 > 0 for x > x_0 ≥ 0, and linearly rising for x between 0 and x_0: V(x) = 0 for x < 0, V(x) = xV_0/x_0 for 0 ≤ x < x_0, V(x) = V_0 for x ≥ x_0. (2) We call the negative x-axis region I, while region II is the region x > 0 (where V(x) > 0). We study scattering solutions where the incoming wave consists of a single soliton arriving from region I. The outgoing wave will then in general be a complicated superposition of solitons and non-solitary waves, in general moving both into regions I and II. The interesting questions are how many solitons will leave the scattering region and with what energies, how much of the total energy is transmitted and reflected, and how much of it goes into non-solitary waves. For a constant potential V_0 the soliton solutions of eq.(1) form a two-parameter manifold (apart from translations). Taking as parameters the velocity v and the amplitude a, these solutions read [24] ψ(x,t) = a / cosh[a(x - vt)] · exp(i{vx + [(a² - v²)/2 - V_0] t}). (3) We denote the velocity of the incoming soliton as v_0. Using a suitable rescaling of x, t and ψ, we can always choose its amplitude as a_0 = 1/2, without loss of generality. Among the infinitely many conserved quantities (for V(x) = const!) the following three are of particular interest: the normalization N = ∫ |ψ|² dx, (4) the energy E = ∫ ( (1/2)|∂ψ/∂x|² - (1/2)|ψ|⁴ + V(x)|ψ|² ) dx, (5) and the momentum P = (1/2i) ∫ ( ψ* ∂ψ/∂x - ψ ∂ψ*/∂x ) dx. (6) For the soliton given by eq.(3), N = 2a, P = vN, and E = (v²/2 - a²/6)N + ⟨V⟩N, where the average over V(x) is taken with weight ∝ |ψ|² as indicated by eq.(5). For a slowly varying V(x) (which implies x_0/V_0 ≫ 1 in our case) the amplitude is approximately constant, and the soliton moves like a classical particle with mass m = 2a in an external potential mV(x) [10]. The mass of the incoming soliton is m_0 = 1 with our normalization. Another limit case where the soliton behaves like a particle is that of V_0 ≪ K_0, where K_0 = v_0²/2 is the kinetic energy of the incoming soliton.
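To make the soliton and the conserved quantities concrete, here is a minimal numpy sketch (not part of the original paper; box length, grid size and soliton position are arbitrary illustrative choices). It builds the incoming soliton of eq. (3) with a_0 = 1/2 and v_0 = 0.8 on a periodic grid, constructs the ramp potential of eq. (2) for x_0 > 0, and evaluates N, E and P by straightforward quadrature.

```python
import numpy as np

# Grid and a single incoming soliton (a0 = 1/2, v0 = 0.8), eq. (3) with V = 0 on the left
L, n = 200.0, 4096
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]
a0, v0, x_c = 0.5, 0.8, -40.0
psi = a0 / np.cosh(a0 * (x - x_c)) * np.exp(1j * v0 * x)

# Piecewise-linear ramp, eq. (2); assumes x0 > 0
V0, x0 = 0.3, 5.0
V = np.clip(x / x0, 0.0, 1.0) * V0

def conserved(psi, V, dx):
    """Normalization N (eq. 4), energy E (eq. 5) and momentum P (eq. 6) on the grid."""
    dpsi = np.gradient(psi, dx)
    N = np.sum(np.abs(psi)**2) * dx
    E = np.sum(0.5*np.abs(dpsi)**2 - 0.5*np.abs(psi)**4 + V*np.abs(psi)**2) * dx
    P = np.sum(np.imag(np.conj(psi) * dpsi)) * dx
    return N, E, P

N, E, P = conserved(psi, V, dx)   # expect N ~ 2*a0 = 1, E ~ (v0**2 - 1/12)/2, P ~ v0*N
```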
It is easily seen that N and E are also conserved for non-constant potential V, while this is not true for P. Denoting by N_i, i = I, II, the normalization in region i, we have thus N_I,out + N_II,out = N_I,in = 1. Similarly, energy conservation gives E_I,out + E_II,out = E_I,in = (v_0² - 1/12)/2. Conservation of N and E poses restrictions on the final state. In general, they do not seem to be very stringent. Assume e.g. that the final state consists of two solitons moving in opposite directions, (a, v) moving into region I and (b, w) moving into II. Then we find that a + b = 1/2, v_0² = ab + 2(av² + bw² + 2bV_0). (7) This does not imply, in particular, a lower bound on v_0 since b and v can be arbitrarily small. Similarly, for any initial soliton we can have any number of outgoing solitons, provided there is at least one reflected and one transmitted soliton. Conservation of N and E is more stringent if no or all solitons are reflected. For instance, if the final state consists of a single transmitted soliton, then its velocity is v_II,out = √(v_0² - 2V_0). This conforms with the general statement that the soliton behaves like a classical particle with m = 1, and shows that there is no transmission if v_0 < √(2V_0) (i.e., K_0 < V_0) and x_0 ≫ V_0. It was verified numerically in [11]. These authors concluded indeed that solitons impinging on a potential step behave like classical particles. It was mainly this claim which stimulated our investigation. Symplectic integration The NLSE is a classical hamiltonian system with Poisson bracket {ψ(x), ψ*(y)} = -i δ(x - y) (8) and hamiltonian H = E. This implies in particular that it can be written as ψ̇ = {ψ, H} = ℋψ, (9) where the linear ('Liouville') operator ℋ is defined as ℋψ = {ψ, H}. Split-operator methods can be applied by splitting ℋ = 𝒯 + 𝒱, where 𝒯 and 𝒱 are the Liouvilleans corresponding to (1/2)∫dx |∂_x ψ|² and ∫dx (-(1/2)|ψ|⁴ + V|ψ|²), 𝒯ψ = (i/2) ∂²_x ψ, 𝒱ψ = i(|ψ|² - V)ψ. (10) In a paper by McLachlan & Atela [23], a fourth order algorithm was introduced which minimizes the neglected fifth order terms in the Baker-Campbell-Hausdorff formula for hamiltonians for which {{{𝒯, 𝒱}, 𝒱}, 𝒱} ≡ 0. (11) This applies obviously to hamiltonians with T = (1/2)(p, M⁻¹p), V = V(x), with M a constant mass matrix and {q_i, p_k} = δ_ik, since there each commutator with 𝒱 acts as a derivative operator on any function of p. In [17] it was shown that this algorithm can also be applied to the linear SE where it gives better performance than the general fourth order algorithm [12] which does not take into account this special structure. Although the argument is less straightforward in the present case, it is not too hard to see that eq.(11) holds also there [22]. Let f(|ψ|², x) and g(|ψ|², x) be arbitrary functions with finite first and second derivatives. Then one finds { ∫dx |∂_x ψ|² g(|ψ|², x), ∫dy f(|ψ|², y) } = i ∫dx (ψ ψ*_xx - ψ* ψ_xx) g f', (12) where f' = ∂f/∂|ψ|², and bracketing once more with ∫dy f yields an expression of the form ∫dx (⋯) df'(|ψ(x)|², x)/dx. (13) Since the last expression is a functional of |ψ|² only, its Poisson bracket with ∫dy f vanishes identically, QED. The coefficients α_k and β_k for the McLachlan-Atela method are listed in [23,17]. Our implementation involves a spatial grid with Fourier transformation after each half step [17]. Since 𝒯 and 𝒱 both conserve the normalization exactly, N should be conserved up to round-off errors. This was checked numerically; relative errors typically were of order 10⁻¹¹. Energy is not conserved exactly, and its error was of order 10⁻⁵ after an evolution time t = 300 with an integration step Δt = 0.005.
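A minimal sketch of the split-operator scheme just described is given below. This is a hedged illustration rather than the authors' code: the kinetic substep is applied exactly in Fourier space, the nonlinear/potential substep is a pure phase rotation (since |ψ| is conserved by 𝒱 for the NLSE), and the coefficient pairs default to the second-order Strang values; the fourth-order McLachlan-Atela coefficients tabulated in refs. [23,17] can be substituted without any other change. Function and parameter names are mine.

```python
import numpy as np

def split_step(psi, V, dx, dt, steps, alphas=(0.5, 0.5), betas=(1.0, 0.0)):
    """
    Split-operator integration of eq. (1): exp((T+V) dt) ~ prod_k exp(alpha_k T dt) exp(beta_k V dt).
    The default (alphas, betas) are the second-order Strang values; substitute the fourth-order
    McLachlan-Atela coefficients of refs. [23,17] for the scheme used in the paper.
    """
    n = psi.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    for _ in range(steps):
        for a, b in zip(alphas, betas):
            # Kinetic substep, exact in Fourier space: psi_hat *= exp(-i k^2 a dt / 2)
            psi = np.fft.ifft(np.exp(-0.5j * k**2 * a * dt) * np.fft.fft(psi))
            # Nonlinear + potential substep, a pure phase rotation since |psi| is conserved
            if b != 0.0:
                psi = psi * np.exp(1j * (np.abs(psi)**2 - V) * b * dt)
    return psi
```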
The precise value depended of course on the parameters of the soliton and on x_0. It was checked that the algorithm is indeed fourth order, and is more precise than the general fourth order symplectic [12] and the leap-frog (second order symplectic) algorithms. We also tested two other discrete Hamiltonian integration schemes which were examined in [26]. They both show the same qualitative behavior, but the discretization of the Laplace operator requires smaller time steps for the same spatial discretization width. All this demonstrates the advantage of the McLachlan-Atela algorithm. Results During the simulations we measured normalization N_i, energy E_i, and momentum P_i in each region (i = I, II) separately. The derivatives of ψ and ψ* were of course computed in Fourier space, as this is much more precise than taking finite differences in x-space. Since we have two conserved quantities, we can define two sets of transmission and reflection coefficients. We call them T_N, R_N and T_E, R_E: T_N = N_II/N, R_N = N_I/N = 1 - T_N (14) and T_E = E_II/E, R_E = E_I/E = 1 - T_E. (15) In addition we registered all local maxima of |ψ(x)|² with |ψ(x)|² > 1/3000. Since our model involves 3 free parameters (V_0, x_0, v_0), it is impossible to present results exhaustively. We did a large number of simulations with different parameter values, but we present only a few of them here to illustrate the variety of the scenarios. Our numerical simulations confirmed the prediction that the soliton behaves as a classical particle if x_0/V_0 ≫ 1, and if V_0 ≪ K_0. The same is true also if x_0 = 0 and V_0 = ∞, i.e. if the potential acts like a hard wall. In that case, an exact solution of the NLSE with boundary condition ψ|_{x=0} = 0 and correct initial conditions in region I is provided by a state with two (interacting) solitons with opposite velocities and phases but equal amplitudes [11]. While the above essentially just checked the correctness of our integration routine, a less trivial result is that we confirmed the observations of Nogami and Toyama [11] for their parameter choice x_0 = 0, v_0 = 0.2, V_0 ≫ K_0. But we did not verify their claim that this is the typical behavior. Instead, the soliton typically breaks up and does not behave like a classical particle. In general, after the soliton hits the potential ramp, we found typically more than a single maximum of |ψ(x)|. Moreover, the heights of these maxima in general were not constant in time, though they moved with practically constant velocities (see figs. 1, 3, 5, 7). Instead, they often showed very marked oscillations (figs. 2, 4, 6, 8) which were damped in all cases. Such damped oscillations result typically from superpositions of solitons with non-solitary waves [27]. We checked that a superposition of a soliton with a Gaussian wave packet gave essentially the same patterns. In the following we shall only show results for v_0 = 0.8 although, as we said, we had made runs also with different v_0 and with similar results in general. Figures 1 and 2 show the case where the potential is a step function (x_0 = 0) and the kinetic energy (K_0 = 0.32) is larger than its height V_0 = 0.3. Classically one would expect the soliton to move into region II and to propagate there with a reduced speed. But our simulation shows that it breaks up into two solitons with roughly equal heights and with velocities v = 0.588 and w = 0.395. About half of the normalization and roughly 70% of the energy are transmitted (T_N = 0.527, T_E = 0.712).
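For completeness, a small helper (again only an illustration, reusing the grid conventions of the earlier sketch; taking x ≥ 0 as region II follows eq. (2)) that evaluates the transmission and reflection coefficients of eqs. (14)-(15) from a wave function on the grid:

```python
import numpy as np

def transmission(psi, V, x, dx):
    """Transmission/reflection coefficients of eqs. (14)-(15): region I is x < 0, region II is x >= 0."""
    right = x >= 0.0
    dpsi = np.gradient(psi, dx)
    e_density = 0.5*np.abs(dpsi)**2 - 0.5*np.abs(psi)**4 + V*np.abs(psi)**2
    N = np.sum(np.abs(psi)**2) * dx
    E = np.sum(e_density) * dx
    T_N = np.sum(np.abs(psi[right])**2) * dx / N
    T_E = np.sum(e_density[right]) * dx / E
    return T_N, 1.0 - T_N, T_E, 1.0 - T_E
```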
Inserting these numbers into eq.(7), we find perfect agreement (discrepancies are < 1%). This indicates that radiation in form of non-solitary waves is small in spite of the wiggles seen in fig. 2. More precisely, we compared our data with ref. [27] by assuming that the transmitted wave is a single solitary wave immediately after leaving the interaction region. We found perfect agreement if we assume that this wave has exactly the same shape and width as the incoming soliton, but an amplitude reduced by a factor 0.728. Thus, at least for these parameter values, the main effect of the interaction on the transmitted wave is simply a reduction of amplitude. The situation where the potential step (V_0 = 0.34, x_0 = 0) is higher than the kinetic energy K_0 is plotted in figures 3 and 4. Here one would expect classically that the incident soliton is completely reflected back into region I. But once again the behavior is quite different: the soliton splits up into two. The transmitted one is not as high as the reflected one (T_N = 0.373) and therefore much wider, but it still carries more than half of the initial energy, T_E = 0.571. As we increase V_0 further, the transmitted soliton rapidly shrinks. It becomes unobservable at V_0 ≈ 2K_0, where the soliton is practically completely reflected. Let us now study positive values of x_0, i.e. potential ramps with finite slopes. Our data show unambiguously that this slope has a strong influence. If x_0 is of order 1, the soliton still breaks up as described above (figs. 5, 6), with even larger oscillations and even more "dirt" than for x_0 = 0. Flattening the potential ramp further but leaving its height constant, the soliton finally travels along the classically expected trajectory (figs. 7, 8): in the ramp region it sees a constant force and hence moves on a parabola; it is reflected (transmitted) for V_0 > K_0 (V_0 < K_0). This dependence on the slope of the ramp is seen very clearly when plotting the energy in region II as a function of time, see fig. 9. While the asymptotic state is reached very quickly for steep potentials, this evolution takes very long for gentle slopes. If x_0 ≪ 1 (corresponding to a soliton width much larger than x_0), the energy change is sudden when the soliton crosses the point x = 0. Finally, the dependence of the transmission coefficients on x_0 is shown in fig. 10. We see that they are not monotonic, with the nonmonotonicity more pronounced for T_N than for T_E. This is an unexpected effect which we do not know how to explain. The fact that T_N < T_E for all x_0 is less surprising. Summary and conclusions In this note we have applied an optimized fourth order symplectic integrator to the scattering of NLSE solitons from an external potential. The integrator is optimized in the sense that it takes into account that the kinetic energy is bilinear in ∂_xψ. It was found to be more precise than the general fourth order symplectic integrator. We found that solitons in general break up when hitting a potential threshold, in contrast to recent claims. The complexity of the outgoing state depends on the parameters of the potential and of the soliton, but most frequently the soliton breaks into two, with rather little radiation. The NLSE can be considered as a special case of the complex Ginzburg-Landau (CGL) equation ψ̇ = εψ + β|ψ|²ψ + γ∇²ψ (ε, β, γ ∈ ℂ) with complex constants. The applicability of our integrator does not depend on the phases of these terms, whence it should be applicable also to the CGL equation in general.
We just have to take into account that |ψ| is not constant during the evolution under the nonlinear term if Re ε, Re β ≠ 0. In that case the integration of 𝒱 involves solving the easy differential equation d|ψ|²/dt = 2(Re ε |ψ|² + Re β |ψ|⁴). This work was partly supported by DFG within the Graduiertenkolleg "Feldtheoretische und numerische Methoden in der Elementarteilchen- und Statistischen Physik", and within SFB 237.
2014-10-01T00:00:00.000Z
1995-06-30T00:00:00.000
{ "year": 1995, "sha1": "6062be682c2e049fad0fb40119f1b2517cec4696", "oa_license": null, "oa_url": "http://arxiv.org/pdf/chao-dyn/9506011", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "aefb0f5b73af8264e725b7517a168e87054233fd", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
1951292
pes2o/s2orc
v3-fos-license
Review of the Inhibition of Biological Activities of Food-Related Selected Toxins by Natural Compounds There is a need to develop food-compatible conditions to alter the structures of fungal, bacterial, and plant toxins, thus transforming toxins to nontoxic molecules. The term ‘chemical genetics’ has been used to describe this approach. This overview attempts to survey and consolidate the widely scattered literature on the inhibition by natural compounds and plant extracts of the biological (toxicological) activity of the following food-related toxins: aflatoxin B1, fumonisins, and ochratoxin A produced by fungi; cholera toxin produced by Vibrio cholerae bacteria; Shiga toxins produced by E. coli bacteria; staphylococcal enterotoxins produced by Staphylococcus aureus bacteria; ricin produced by seeds of the castor plant Ricinus communis; and the glycoalkaloid α-chaconine synthesized in potato tubers and leaves. The reduction of biological activity has been achieved by one or more of the following approaches: inhibition of the release of the toxin into the environment, especially food; an alteration of the structural integrity of the toxin molecules; changes in the optimum microenvironment, especially pH, for toxin activity; and protection against adverse effects of the toxins in cells, animals, and humans (chemoprevention). The results show that food-compatible and safe compounds with anti-toxin properties can be used to reduce the toxic potential of these toxins. Practical applications and research needs are suggested that may further facilitate reducing the toxic burden of the diet. Researchers are challenged to (a) apply the available methods without adversely affecting the nutritional quality, safety, and sensory attributes of animal feed and human food and (b) educate food producers and processors and the public about available approaches to mitigating the undesirable effects of natural toxins that may present in the diet. Introduction Numerous foodborne diseases result from ingesting foods that are contaminated with microbial and plant toxins. Naturally occurring food toxicants can adversely affect the nutritional quality and safety of foods. Because of a growing concern about relationships between diet and diseases and because of a growing need to improve the quality and safety of our food supply, research is needed to define conditions that minimize the levels of toxic compounds in foods. Thus, in order to improve food safety, there is a need for technologies to inactivate or inhibit toxins with food-compatible natural compounds and plant extracts. Most natural food toxicants possess specific sites that are responsible for their adverse effects in animals and humans. Therefore, modifying such sites with site-specific reagents that will change the structural integrity and thus prevent the toxins from interacting with cell receptor sites in vivo may make it possible to decrease their toxic potential. In this review, we will present a brief overview of published studies on some possible approaches to reducing deleterious effects of the following toxins produced by fungi (aflatoxin B1, fumonisins, and ochratoxin A), bacteria (cholera toxin, botulinum neurotoxin, Shiga toxins, and Staphylococcus enterotoxin), and plants (ricin and α-chaconine). Thiol Adducts Aflatoxin B1 (AFB1) is a pre-carcinogen that is transformed in vivo to an active epoxide [1]. 
Prior treatment with site-specific reagents should modify the molecule in a manner that will prevent formation of the epoxide and inhibit its mutagenic and carcinogenic activity. Because thiols are potent nucleophiles [2], they may competitively inhibit the interaction of the epoxide with DNA. Our HPLC studies showed that exposure of AFB1 to N-acetyl-L-cysteine (NAC) resulted in the disappearance of the AFB1 peak and the appearance of a new peak, presumably the thiol adduct ( Figure 1) [3,4]. The integrated absorbance of this peak indicated that AFB1 was converted nearly quantitatively to this single derivative. In additional experiments we found SH-containing compounds, including NAC, reduced glutathione (GSH), and N-2-mercaptopropionylglycine inactivated the mutagenic activity of AFB1 in the Ames Salmonella Typhimurium test. Surprisingly, L-cysteine was less effective. Figure 2 shows three postulated pathways for possible aflatoxin-thiol interactions. Pathway A shows the nucleophilic addition of a thiol to the 2,3-double bond of AFB1 to form an inactive thiol adduct. Pathway B depicts the interaction of a thiol with the 2,3-epoxide, which may prevent the epoxide from interacting with DNA. Pathway C shows the displacement of the AFB1-DNA (guanine) adduct, which thus prevents tumorigenesis. Figure 1. HPLC of AFB1 and AFB1-N-acetylcysteine (NAC) adduct. Adapted from [3,4]. Figure 2. Possible pathways for the inhibition of AFB1 mutagenicity/carcinogenicity of AFB1 by thiols such as cysteine, N-acetylcysteine, and reduced glutathione. See text. Adapted from [3,4]. Related in vitro and in vivo studies with sulfur amino acids are described by De Flora et al. [5,6], Shetty et al. [7], Guengerich et al. [8] and reviewed by Madrigal-Santillán et al. [9] and Valencia-Quintana et al. [10]. Cavalcante et al. [11] found that apple juice and cashews also exhibited anti-mutagenicity in the Ames test. These observations suggest that thiols may be useful for inactivating AFB1 in contaminated foods, as an antidote to treat AFB1 toxicity and for prophylaxis to prevent AFB1 poisoning. Thiol-adduct formation also reduced the very high mutagenicity of the tetrachloroimide mutagen formed in poultry chiller water [12] and inhibited the heat-induced formation in plant foods of the presumptive carcinogen and teratogen acrylamide [13,14] as well as the antinutritional compound lysinoalanine in food exposed to heat and high pH [15]. Lysine Adducts Several studies have reported on the formation of adducts between hydrolysis and oxidation products of AFB1 and free and protein-bound lysine residues. These include: (a) the observation that a dialdehyde derived from the exo-8,9-epoxide part of AFB1 reacted with both lysine and albumin to form lysine adducts [16]; (b) the finding that human AFB1 albumin adducts determined by three independent methods can be used to assess human exposure to this carcinogen [17]; and (c) a detailed description of the toxicokinetics of the serum AFB1-lysine adduct in rats [18]. These authors suggest that this biomarker has the potential to be used to relate exposure to AFB1 to human health effects. Chemoprevention of AFB1-Induced Carcinogenesis in Cells Several studies reported on the inhibition of AFB1-induced apoptosis (cell death) of cancer cells. These include the following observations: • Rosmarinic acid, a phenolic antioxidant contained in basil, mint, and sage, prevented AFB1-induced carcinogenesis of human hepatoma HepG2 cells [19]. 
• Carnosic acid, a phenolic antioxidant present in the rosemary plant, exhibited a dose-dependent protective effect against apoptosis of HepG2 cells [21]. • Leontopodic acid, isolated from the aerial parts of the Leontopodium alpinum plant, showed chemopreventive effects against AFB1-and deoxynivalenol-induced cell damage [22]. The cited beneficial effects seem to be associated with antioxidative and/or free radical scavenging properties of the evaluated compounds. Inhibition of Aflatoxicosis Several studies describe the inhibition of aflatoxin toxicity by food compounds in different animal species. These include the following observations: • The amino acid cysteine and methionine and yeast inhibited aflatoxicosis in rats [9,23]. • Garlic powder protected against AFB1-induced DNA damage in rat liver and colon [25]. • The polysaccharide mannan and yeast reduced AFB1-and ochratoxin-induced DNA damage in rats [9]. Reduction of AFB1 in Food A detailed discussion of the chemical inactivation of AFB1 in different foods is beyond the scope of this review. Reported studies include the following observations: • Treatment with aqueous citric acid degraded 96.7% of AFB1 in maize (corn) with an initial concentration of 93 ng/g [29]. • Citric acid was more effective than lactic acid in reducing AFB1 in extrusion cooked sorghum [30]. • Extrusion cooking of contaminated peanut meal in the presence of calcium chloride, lysine, and methylamine reduced AFB1 from an initial value of 417.7 µg/kg to 66.9 µg/kg [31]. • Intermittent pumping of the volatile soybean aldehyde trans-2-exanal protected stored corn from Aspergillus flavus growth and aflatoxin contamination [34]. • The highest aflatoxin reduction (24.8%) was observed after cooking contaminated rice samples in a rice cooker, but the difference with other home-cooking methods was not statistically significant [35]. Practical Applications The need to reduce the aflatoxin content of the diet is strikingly demonstrated by the observed significant reduction in the incidence of human liver cancer, especially in age groups >25 years, associated with reduced content of dietary aflatoxin [36]. The authors ascribe this beneficial effect to a shift of food consumption from moldy corn to fresh rice and improved economic status. To control fungal growth and aflatoxin and fumonisin production, drying of corn should take place soon after harvest [37]. Treatment with citric acid seems to be an effective and inexpensive method to reduce the aflatoxin content by 97%. It is not known whether the dietary ingredients mentioned above would protect humans against aflatoxicosis and liver cancer. In view, however, of the observed protection against aflatoxin-induced liver damage in albino male mice by co-administration with a black tea extract (2% infusion in water) [38], black tea may also protect humans. These observations merit additional comment. Based on the recent in vitro observations by Rasooly et al. [39] that low levels of AFB1 stimulate growth of Vero kidney cells and high levels kill the cells, it is likely that the low residual AFB1 levels in food mentioned above would exert different and unknown biological effects in vivo. Further study is needed to investigate these effects in more detail. Fumonisins Carcinogenic [40,41] and neurotoxic [42] fumonisins, another class of fungal mycotoxins produced by Fusarium species and other fungal species that contaminate food, mainly grain, represent a significant hazard to the food chain [43]. 
For example, the consumption of fumonisin-containing maize retarded the growth of Tanzanian infants and adult celiac patients consumed higher levels of fumonisin (0.395 μg/kg) than non-celiacs (0.029 μg) [44,45]. Here, we present several reported studies designed to overcome fumonisin production and toxicity. • Plant essential oils (Cinnamomum zeylanicum, Coriandrum sativum, Melissa officinalis, Mentha piperita, Salvia officinalis, and Thymus vulgaris) inhibited Fusarium mycotoxin production as well as fungal contamination of wheat seeds [46]. Inhibitory effects correlated with antioxidative properties of the oils. The highest inhibition of fungal growth was after 5 days of treatment and inhibition decreased after 22 days. The authors recommend the use of essential oils as natural preservatives for stored cereals. • Fumigation of corn flour and corn kernels with allyl-, benzyl-, and phenyl isothiocyanates found in garlic resulted in a significant reduction of fumonisin content [47]. • Adsorption of the mycotoxin to a clay-based sorbent resulted in decreased bioavailability [48]. • Extrusion or alkaline (nixtamalisation) cooking of fumonisin-contaminated corn is an effective method to reduce potential toxicity of fumonisins [50,51]. • An ethanol extract of the plant Aquilegia vulgaris counteracted the oxidative stress and toxicity of fumonisins in rats [52]. • Several herbal teas and extracts protected against fumonisin B1-induced cancer promotion in rat liver [54]. Practical Applications The cited results indicate that approaches are available to reduce the production and toxic potential of fumonisin in contaminated grain. Because, as mentioned above, plant essential oils seem to inhibit the contamination of grain by fumonisin-producing fungi and the production of fumonisins, and because a large number of essential oils and their bioactive constituents have been shown to inactivate foodborne microorganisms in laboratory media and in food [55][56][57], there is a need to optimize the anti-fumonisin potential of many of these generally-recognized-as-safe (GRAS)-listed natural compounds. We are not aware of an approach that can be used to inhibit the toxicity of toxic weed seeds that also contaminate grain [58][59][60]. Ochratoxin A Another fungal toxin called ochratoxin A produced by Aspergillus and Penicillium species is reported to contaminate food [61][62][63][64][65], to induce cytotoxicity in mammalian cells [66][67][68], and toxicity and carcinogenicity and nephrotoxicity in animals and humans [69,70]. The following reported observations are relevant to the theme of the present paper: • Barberis et al. [71] found that food grade antioxidants and antimicrobials controlled the growth of the fungi and ochratoxin A production on peanut kernels. • Virgili et al. [73] found that native yeast controls the production of ochratoxin production in dry cured ham. The suggested approaches to reduce the toxic potential of aflatoxin and fumonisin are also expected to be effective against ochratoxin. Botulinum Neurotoxins Bacteria of the genus Clostridium produce one tetanus neurotoxin (TeNT) and seven different botulinum neurotoxins (BoNT/A,_/B,_/C,_/D,_/E,_/F, and /G) that cause the flaccid paralysis of botulism [75]. These neurotoxins have a similar four-domain structure but differ in both antigenic properties and interactions with intracellular targets. 
Only the L chain, the N-terminal domain of 50 kDa, enters the cytosol, where it cleaves the synaptosomal (SNAP-25) protein and blocks neurotransmitter (acetylcholine) release, causing peripheral neuromuscular blockade and flaccid paralysis in humans. Botulinum neurotoxin is highly toxic to humans. Serotype A (BoNT/A) is the most potent of several serotypes with an LD 50 of 0.8 µg for a human weighing 70 kg [76]. Medical treatment for botulism is a major challenge [77,78]. Although rare, outbreaks of foodborne botulism are reported to occur worldwide. In the United States, Juliao et al. [79] reported that a commercially produced hot dog chili sauce seems responsible for four cases of type A botulism and Date et al. [80] reported on three outbreaks of foodborne botulism caused by unsafe canning of vegetables. These outbreaks may be the result of survival of Clostridium botulinum spores during preparation of these foods. Different food categories are reported to be susceptible to contamination by Clostridium botulinum pathogens. These include baked products [81][82][83], dairy products [84], fresh mussels [85] and especially canned fruits and vegetables [86]. The following observations are relevant to the theme of the present review: • Studies by Daifas et al. [87] revealed that a commercial mastic resin and its essential oil in ethanol solution inhibited the growth of proteolytic strains of Clostridium botulinum in media. The anti-botulinal activity was greater when the test substances were applied in the vapor state than in solution. The test substances did not, however, inhibit neurotoxin production in challenge studies with the bacteria in English-style crumpets but the authors suggest that these natural products have the potential to inhibit pathogenic bacteria in bakery products. • A reduced level of nitrite (75 mg/kg) inhibited the toxigenesis of Clostridium botulinum type B in meat products [88]. • The combined treatment with chlorine and lactic acid inhibited both E. coli O157:H7 and Clostridium sporogenes in spinach packaged in modified atmospheres [89]. • The thearubigin polymeric fraction of black tea blocked the toxicity of the botulism toxin by binding (chelation) to the metalloproteinase part of the toxin [90][91][92]. • Kaempfenol, kaempferol, and quercetin glycosides isolated from black tea inhibited the neuromuscular inhibitory effects of botulinum neurotoxin A in mouse phrenic nerve-diaphragm preparations [93]. • Ethyl acetate extracts of several teas mixed with botulinum neurotoxin type A also prevented neuromuscular blockade of a mouse phrenic nerve-diaphragm preparation [94] with an order of potency of the extracts of black tea > oolong tea > roasted tea > green tea (no effect). • Water-soluble fractions of the stinging nettle leaf extract inhibited the protease activity of botulinum neurotoxin type A but not type B [95]. • Chicoric acid isolated from the herbal plant Echinacea is a potent exosite inhibitor of BoNT/A with a synergistic effect when combined with an active site inhibitor [76]. • The natural compound lomofungin inhibited the BoNT serotype A light chain metalloproteinase (LC) by nonclassical inhibition kinetics [96]. Šilhár et al. [76] state that the ability to inhibit an exosite by a small molecule requires disruption of protein-protein interactions and that natural products have the potential to act as new drugs in the treatment of botulinum neurotoxicity. 
The anti-toxin effect of black tea theaflavins and thearubigins and other polyphenolic compounds may result from covalent binding of the botulinum neurotoxin, possibly as illustrated in Figure 3, which depicts sites on the toxin molecule susceptible to inactivation [97]. Practical Applications The cited studies suggest that natural pure compounds and plant extracts added to food have the potential to help prevent botulism. Because commercial teas vary widely in their content of catechins and theaflavins [98,99], consumers have a choice of selecting teas with a high content of these anti-toxin compounds. Based on the above mentioned mechanism of inhibition of the botulinum toxin by natural polyphenolic compounds, it is likely that consumption of phenolic-rich fruits and vegetables may help protect against botulism. In addition, because there seems to be no available drug therapy, polyphenolic-rich whole foods and their bioactive compounds should also be evaluated for their medicinal properties. Finally, Juneja and colleagues [100,101] previously found that carvacrol (the main ingredient of oregano essential oil), oregano oil, cinnamaldehyde (the main ingredient of cinnamon oil), thymol (the main ingredient of thyme oil) and a green tea leaf extract inhibited the germination and outgrowth of the related spore-forming Clostridium perfringens pathogens in meat. It is not known whether these natural products would also inhibit Clostridium botulinum and/or the release of the neurotoxin from the pathogens in food so this aspect merits study. zinc-containing metalloproteinase susceptible to chelation by catechin phenolic OH groups; intramolecular disulfide bond of the heavy chain (disulfide site-1); intermolecular disulfide bond linking the light and heavy chains (disulfide site-2). The disulfide bonds are susceptible to reduction and/or sulfhydryl-disulfide interchange initiated by sulfhydryl compounds such as N-acetyl-L-cysteine. Adapted from [102]. Cholera Toxin (CT) Ingestion of drinking water or cooked shellfish contaminated by the Gram-negative bacterium Vibrio cholerae serotypes O1 and O139 causes the potentially fatal disease cholera, characterized by profuse diarrhea [103]. Diarrhea results from the interaction of the cholera enterotoxin secreted by the bacteria with adenylate cyclase of the mucosa of the digestive tract, causing water flow from the open ion channels through osmosis. A major challenge is to overcome emerging antibiotic-resistant strains and inhibit the biological effects of the toxin. Here, we will briefly review reported studies on the inhibition of the toxin by components of the diet. Toda et al. [104,105] found that tea catechins protected against experimental infection by Vibrio cholerae O1 bacteria and it has been shown that other polyphenolic compounds also inhibited the virulence of cholera toxin [106]. Indeed, a catechin from green tea bound to and interfered with the cell binding and internalization of cholera toxin [107]. Shimamura [108] found that SH-containing compounds such as cysteine and reduced glutathione inhibited the production of cholera toxin by Vibrio cholerae and that added vitamin B 12 reversed the inhibition. These observations suggest that inhibition may result from the formation of an -S-S-bond between added thiols and toxin SH groups via sulfhydryl-disulfide interchange by mechanisms described in detail elsewhere for the inactivation of soybean inhibitors of digestive enzymes and other disulfide-containing protein toxins [2,109,110]. 
The B pentamer of the AB5 composition of CT binds to cell membranes and the A subunit acts as an enzyme after cleavage [111]. Becker et al. [111] examined the inhibition of galactose-rich natural substances of two AB5 enterotoxins, the heat-labile LT-1 toxin produced by E. coli and CT produced by V. cholerae, to bind to sites of ganglioside receptor GM1 using a specially adapted GM-1 coated microtiter-well ELISA. Compared to pure milk saccharides, skim milk powder interfered with both LT-I and CT inhibition. Fenugreek seeds were also highly active. The high inhibitory activity of binding of the toxin to the cell receptor sites by components of skim milk powder compared to numerous other galactose-containing substances evaluated may be due to the presence in skim milk of not only galactose-containing compounds but also glycopeptides and glycolipids, which may act synergistically. Related studies by Sinclair et al. [112] showed that sialyloligosaccharides derived from egg yolk inhibited binding of CT to GM1-OS immobilized to artificial planar lipid membranes. The authors suggest that these food-grade molecules could be used as health-promoting food additives. Rasmussen et al. [103] used a high-throughput screening assay of an ~8,000 compound structurally diverse chemical library for inhibitors of V. cholerae motility, an activity required by the pathogens to colonize the small intestine. They discovered a group of quinazoline-2,4-diamino analogs that completely suppressed motility. The assay merits use to screen for the inhibition of motility by natural compounds. These authors use the term 'chemical genetics' to describe how small molecules can change the way protein toxins behave in real time directly rather indirectly by manipulating their genes. Chaterjee et al. [113] examined whether red chili (Capsicum annuum), which contains capsaicin and other bioactive compounds, can suppress CT production in V. cholerae. They found that a methanol extract of the peppers and capsaicin strongly inhibited the CT production of various serogroups. The authors describe repression of transcription of virulence genes associated with the inhibition. As is the case with teas mentioned earlier, consumers have a choice of selecting peppers with a high content of capsaicin and other pungent pepper compounds [114,115]. Yamasaki et al. [116] note that although extracts from plants such as 'apple', 'daio', 'elephant garlic, 'green tea', 'guazuma', and 'hop' have been shown to inhibit bacterial growth of V. cholerae, inhibiting bacterial growth may impose selective pressure facilitating development of resistant strains. They suggest that based on the above-mentioned results, a regular intake of chili peppers or other spices could prophylactically and/or therapeutically protect against cholera. Velázquez et al. [117] tested in a rat model for anti-secretory activity of (-)-epicatechin, isolated from the Chianthodendron pentadactylon plant used in Mexican traditional medicine. The inhibitory effect of the catechin on CT was higher (56.9% inhibition) than on the E. coli toxin (24.1% inhibition). Computational molecular docking showed that the epicatechin interacted with four amino acid residues (Asn 103, Phe 31, Phe 223, and The 78) of the catalytic site of the toxin. The authors concluded that these studies support the use of the plant to treat diarrhea. Pigmented rice bran inactivated multiple pathogens including Vibrio cholerae isolated from patients suffering from diarrhea [118,119]. 
It is not known whether bioactive rice brans can also inactivate cholera and other toxins. Practical Applications The cited evidence suggests that natural substances are potential prophylactic and/or therapeutic agents that can be used to protect animals and humans against water and foodborne CT-mediated disease. Specifically, galactose-rich natural compounds, skim milk, fenugreek seeds, chili capsaicins, and (-)-epicatechin from a Mexican medicinal plant seem to be promising candidates to inhibit the toxicity of CT. It is not known whether any of these compounds will be effective against cholera in humans. In addition, preclinical and safety evaluation of a multivalent oral vaccine shows promise for further testing in humans [120]. Shiga/Shiga-like Toxins Shiga toxin is produced by Shigella, and the structurally similar Shiga-like toxins are produced by enterohemorrhagic strains of E. coli (EHEC), such as O157:H7. EHEC are pathogens of major importance for food safety, causing foodborne illnesses, ranging from mild diarrhea to a life-threatening complication known as hemolytic uremic syndrome (HUS). The bacteria produce a family of related toxins that comprise two major groups, verocytotoxin 1 (Stx1) and verocytotoxin 2 (Stx2). Stx2 is reportedly several orders of magnitude more toxic than Stx1. Stx2 is relatively heat stable and is not inactivated by pasteurization [121]. In an important in vivo study, Rasooly et al. [122] found for the first time that orally ingested Stx2, previously thought to be only dangerous when administered enterically, caused histopathological changes in kidney, spleen, and thymus, and mortality in mice. The question arises as to whether adverse effects associated with exposure to Shiga toxin-producing E. coli strains are caused just by the bacteria or by ingested preformed toxin as well. The following observations are relevant to the theme of this review: • Intraperitoneal administration of 1 mg of the green tea catechin epigallocatechin gallate (ECGC) to BALB/c mice completely inhibited the lethal effect of 2 ng of Stx2 [123]. • EGCG and gallocatechin gallate (GCG) also markedly inhibited the extracellular release of Stx2 toxin from E. coli O157:H7 [124]. The mechanism of inhibition seems to involve interference by the catechins of the transfer of periplasmic proteins through the outer membrane of the bacterial cell. The cited findings indicate that tea compounds are potent inhibitors of Stx2. An unanswered question is whether tea compounds and teas can inactivate bacterial toxins present in drinking water and in liquid and solid foods. • The compound eugenol, which is present in many spices, inhibited verotoxin production in a concentration-dependent manner by E. coli O157:H7 [125]. • Glycan-encapsulated gold nanoparticles inhibited Stx1 and Stx2 [127]. The authors suggest that tailored glyconanoparticles that mimic the natural display of glycans in lipid rafts could serve as potential therapeutics for the toxins. They also note that a few amino acid changes in emerging Stx2 variants can change receptor specificity. • In an elegant review, Branson and Turnbull [128] describe mechanistic aspects of the inhibition by multivalent synthetic scaffolds, which include glycopolymers, glycodendrimers, and tailored glycoclusters, that can inhibit the binding of bacterial toxins to specific glycolipids in the cell membrane. The authors conclude that weak interactions of inhibitors can be greatly enhanced through multivalency. 
The safety and food-compatibility of the synthetic inhibitors need to be established before the inhibitors can be added to food. • Quiñones et al. [129] describe the development and application of an improved Vero-d2EGFP cell-based fluorescence assay for the detection of Stx2 and of inhibitors of toxin activity. Grape seed and grape pomace extracts both provided strong cellular protection against Stx2-induced inhibition of protein synthesis (Figure 4). The identified anti-toxin compounds can be used to develop food-compatible conditions for toxin inactivation that will benefit microbial food safety, security, and human health. Figure 4. Effect of plant compounds on protein synthesis levels in Stx-treated Vero-d2EGFP cells. Protein synthesis was measured in Vero-d2EGFP cells after a 2-hour co-incubation with plant polyphenolic compounds and Stx2. Cells were co-incubated with no plant compound, 1 mg caffeic acid/mL, 1 mg red wine concentrate/mL, 0.5 mg grape pomace extract/mL, or 0.1 mg grape seed extract/mL. Adapted from [129]. • Rasooly et al. [130] discovered that freshly prepared juice from locally purchased Red Delicious apples, but not fresh juice from Golden Delicious apples, inactivated the biological activity of Stx2. However, both Golden Delicious juice and water to which 0.3% polyphenol-rich grape pomace, a byproduct of wine production, had been added also inactivated the Shiga toxin. Additional studies with immunomagnetic beads bearing specific antibodies against the toxin revealed that only part of the Stx2 added to apple juice appears to be irreversibly bound to apple juice and grape pomace constituents. The authors suggest that food-compatible and safe anti-toxin compounds can be used to inactivate Shiga toxins in apple juice and possibly also in other liquid and solid foods. It would also be of interest to find out whether apple skin, olive, and oregano leaf bactericidal powders [131] would also inhibit Stx2. • Different grain fractions from pea (Pisum sativum) and faba bean (Vicia faba) inhibited adhesion of enterotoxigenic E. coli cells (ETEC) expressing adhesins and heat-labile LT toxins [132]. Because adhesion is involved in colonization of the host by the pathogens, the authors suggest that some of the fractions have the potential to protect pigs against pathogen-induced diarrhea. • The probiotic bacterium Lactobacillus plantarum, isolated from the fermented milk beverage Kefir, protected Vero cells against the cytotoxicity of Stx2 present in supernatants of E. coli O157:H7 bacteria [133]. • A variety of probiotic bacteria, especially Lactobacilli, inhibited the growth of E. coli strains. Whether these in vitro results can be confirmed in vivo merits study [134]. Practical Applications The identified anti-toxin compounds can be used to develop food-compatible conditions for the inactivation of Shiga toxins in food, animals, and humans that will benefit microbial food safety, security, and human health. In addition, other natural products and plant extracts have been shown to inactivate Shiga toxin-producing bacteria [135,136]. There is a need to determine whether these and related natural products inhibit the release of Shiga toxins and whether any released toxin is susceptible to concurrent inactivation in food, in the digestive tract, and after absorption into the circulation. Staphylococcus Enterotoxins Staphylococcus aureus is a major bacterial pathogen that causes clinical infection and foodborne illness, as reviewed in Rasooly and Friedman [137].
This bacterium produces a group of 21 known enterotoxins (SEs) that have two separate biological activities: they cause gastroenteritis in the gastrointestinal tract and act as superantigens on the immune system. Functional enterotoxins bind to the alpha-helical regions of major histocompatibility complex (MHC) class II molecules outside the peptide-binding groove of antigen-presenting cells (APCs), and also to the variable region (Vβ) of T-cell receptors. The toxin then forms a bridge between T cells and APCs. This event initiates the proliferation of a large fraction (~20%) of T cells, which induces the release of cytokines. At high concentrations, cytokines are involved in causing certain human and animal diseases, such as atopic dermatitis and rheumatoid arthritis in humans and mastitis in dairy cows [137]. We will now briefly mention reported studies designed to overcome the toxicity of the SEs, especially the virulent staphylococcal enterotoxin A (SEA), a single-chain protein that consists of 233 amino acid residues and has a molecular weight of 27,078 Da. • A green tea extract and the tea catechin EGCG, administered intraperitoneally to BALB/c mice, bound to and inhibited staphylococcal enterotoxin B (SEB) [138]. The inhibition of the heat-resistant enterotoxin was both dose- and time-dependent. EGCG also inhibited staphylococcal superantigen-induced activation of T cells both in vitro and in vivo. Because these antigens aggravate atopic dermatitis, the authors suggest that catechins may be useful in the treatment of this human disease. • Ether extracts of the herb Helichrysum italicum inhibited the production of enterotoxins (A-D) by S. aureus strains in culture media, suggesting that the extracts interfere with enterotoxin synthesis [139]. • Lactobacillus starter cultures inhibited both the growth of S. aureus and enterotoxin production in sausages during fermentation [140]. The authors suggest that intestinal Lactobacillus strains could be used as a starter culture to produce microbiologically safe meat products. • Microbial growth and SEA production rates of S. aureus in the presence of undissociated lactic acid can be used as indicators of bacterial growth and SEA formation during the initial stages of cheese production [141]. • The sour-milk beverage Kefir with added alimentary fiber inhibited pathogenic properties of S. aureus in humans [142]. Growth inhibition of S. aureus by lactic acid produced by the starter culture may explain the growth inhibition of the pathogen in pasteurized milk and cheese [143]. • An ethanol extract from the bulb of the Eleutherine americana plant inhibited both S. aureus strains and enterotoxin A-D production in broth and cooked pork [144]. The extract at 2 mg/mL delayed the production of toxins A and C for 8 and 4 h, respectively, whereas toxin B was not detected in the pork after 48 h. The authors suggest that the ability of the extract to inhibit lipase and protease enzymes and to delay enterotoxin production in food indicates that it could be a novel additive against S. aureus in food. • The 12-carbon fatty acid monoether dodecylglycerol (DDG) was more effective than glycerol monolaurate (GML) in inhibiting S. aureus growth in vitro [145]. By contrast, GML was more effective than DDG in suppressing TNF-α, S. aureus growth, and exotoxin production, and in reducing mortality in a rabbit model.
The authors suggest that GML has the potential to be an effective topical anti-staphylococcal anti-infective candidate. • Dilutions of freshly prepared apple juices and a commercial apple polyphenol preparation (Apple Poly®) inhibited the biological activity of SEA in a spleen cell assay (Figure 5) [146]. Studies with immunomagnetic beads bearing specific antibodies against the toxin showed that SEA added to apple juice appears to be largely bound to the juice constituents. Figure 6 depicts a possible mechanistic scheme for the inhibition: (A) the formation of a bridge between antigen-presenting cells (APC) and T cells that results in the induction of T-cell proliferation; and (B) the inhibition of T-cell proliferation by added pure apple juice, which disrupts the connection between APC and T cells. The net beneficial result of these events is the prevention of cytokine release and of the consequent adverse effects induced by cytokines. Abbreviations: MHC, major histocompatibility complex; TCR, T-cell receptor. Adapted from [146]. • A dilution series of the olive compound 4-hydroxytyrosol and a commercial olive powder containing approximately 6% 4-hydroxytyrosol and 6% other phenolic compounds inactivated the pathogen [147]. Two independent assays (5-bromo-2-deoxyuridine (BrdU) incorporation into newly synthesized DNA and glycyl-phenylalanyl-aminofluorocoumarin proteolysis) showed that the olive compound also inactivated the biological activity of SEA at concentrations that were not toxic to the spleen cells used in the assay. Efforts to determine the inhibition of the toxin by the olive powder were not successful because the powder was cytotoxic to the spleen cells at concentrations that are effective against the bacteria. The results (Figure 7; adapted from [147]) show that the olive compound can be used to inactivate both the bacteria and the toxin produced by the bacteria, and that cell-based assays of inhibition can only be performed at inhibitor concentrations that are not toxic to the cells. • The Chinese herbal extract anisodamine inhibited the S. aureus toxin in human blood mononuclear cells [148]. • Hemoglobin inhibited the production of S. aureus exotoxins in a cell assay [149]. • Human monoclonal antibodies against SEB possess the high affinity and toxin neutralization qualities essential for any therapeutic agent [151]. • Several synthetic peptides inhibited the emetic and superantigenic activities of SEA in house musk shrews [152]. • Apple and olive powders and oregano leaves exhibited exceptionally high activity at nanogram levels against S. aureus [131]. Practical Applications The experimental findings suggest that apple, olive, and tea antioxidant and antimicrobial compounds and lactic acid can neutralize the biological activity of SEA. Formulations containing these food ingredients merit further study to define chemopreventive effects against SEA-induced mastitis in dairy cows and atopic dermatitis and rheumatoid arthritis in humans. Ricin Ricin is a highly toxic heterodimeric protein produced by the seeds of the castor plant Ricinus communis. In the plant, ricin is translated as a single 66-kDa polypeptide chain that is activated intracellularly by proteolytic cleavage to form the active 32-kDa A chain containing the enzymatic activity [153,154]. The B chain is essential for the toxin's entry into the cell [155].
The A chain is linked by a disulfide bond to the 34-kDa B chain, a lectin that has an affinity for cell-surface carbohydrates such as galactose, galactosamine, or N-acetylgalactosamine present in glycoproteins and glycolipids. The toxin enters the cell by endocytosis in membrane vesicles and is transported to endosomes and then into the cytosol. After the disulfide bond is reduced, the ricin A chain inactivates ribosomes by removing a specific adenine nucleotide (A4324), located near the 3' end of the polynucleotide chain, from the 28S ribosomal RNA of the 60S ribosomal subunit [156]. This deletion results in the failure of elongation factor-2 to bind to the ribosome and thus inhibits protein synthesis, resulting in cell death. The low lysine content of the A chain reduces its susceptibility to proteolytic degradation in the cytosol [157]. Ricin is a highly toxic protein: a single molecule of ricin reaching the cytosol can kill that cell as a result of inhibition of protein synthesis [153]. A search of the literature failed to reveal any reports of natural compounds that can inhibit the biological activity of ricin, except for the recent report by Rasooly et al. [158], who showed by three independent assays that components of reconstituted powdered milk have a high binding affinity for ricin. Milk can competitively bind to and reduce the amount of toxin available to asialofetuin type II, which is used as a model to study the binding of ricin to galactose cell-surface receptors. An activity assay by immuno-PCR showed that milk can competitively bind to 1 ng/mL of ricin, reducing toxin uptake by the cells and thus inhibiting ricin's biological activity (Figure 8). The inhibitory effect of milk on ricin activity in Vero cells was at the same level as that of anti-ricin antibodies. By contrast, milk did not inhibit the activity at higher ricin concentrations, nor that of another ribosome-inactivating protein, Stx2 produced by pathogenic E. coli O157:H7 (see above). Unlike ricin, which is internalized into cells via a galactose-binding site, Stx2 is internalized through the cell-surface glycolipid receptors globotriaosylceramide Gb3 and Gb4. It seems that ricin toxicity may possibly be reduced by a widely consumed natural liquid food and/or by some of its components. Related studies showed that (a) a sensitive in vitro assay can be used to detect levels as low as 200 pg/mL of biologically active ricin in food [159]; (b) virtual screening of a 50,000-compound library enabled the discovery of new classes of ricin toxin inhibitors [160]; and (c) intra-tumoral injection of a ricin-loaded hydrogel may be useful for interstitial chemotherapy in pancreatic cancer [161]. Practical Applications The oil extracted from castor beans has been used as a lubricant, as a component of plastics, as a fungicide, and in the synthesis of biodiesel fuels. By contrast, the protein-rich byproduct, called castor bean cake or castor bean mash, that remains after cold-press extraction of castor oil cannot be used as an animal feed because it contains ricin and allergenic (2S albumin) proteins [162][163][164]. Fernandes et al. [162] found that solid-state fermentation of the cake with Aspergillus niger eliminated all ricin after 24 h. In addition, treatment of the cake with calcium hydroxide or calcium oxide completely eliminated both the ricin toxicity and the albumin allergenicity. Animal feeding studies of the treated castor cake are needed to confirm the safety of the detoxified product.
In view of the high affinity of milk compounds for ricin mentioned earlier, it would also be of interest to determine whether individual milk compounds, skim milk, or fermented milk products (e.g., Kefir or yogurt) can neutralize ricin in castor bean cake. α-Chaconine The potato glycoalkaloids α-chaconine and α-solanine act as natural defenses against insects and other pests, as reviewed in [165]. In some potato varieties, the concentrations of these compounds can be high. High levels may be toxic to humans as well as to insects. As part of a program to improve the safety of potatoes using molecular plant genetics and parallel food safety evaluation, we evaluated the effect of several potato glycoalkaloids and aglycones in the frog embryo teratogenesis assay-Xenopus (FETAX) [166]. α-Chaconine was found to be teratogenic and more embryotoxic than α-solanine, in terms of the median lethal concentration (LC50) after 96 hr of exposure, the concentration inducing gross terata in 50% of the surviving frog embryos (96-hr EC50, malformation), and the minimum concentration needed to inhibit the growth of the embryos. Since these two compounds differ only in the nature of the carbohydrate side chain attached to the 3-OH group of solanidine, the side chain appears to be an important factor in governing teratogenicity. We also found that mixtures of α-chaconine and α-solanine acted synergistically in causing malformations and mortality, and that the aglycones demissidine, solanidine, and solasodine, which lack a carbohydrate side chain, were less toxic than the glycosides. The FETAX can be used for: (a) predicting the teratogenic potential of Solanaceae alkaloids, glycoalkaloids, and related natural products; and (b) facilitating experimental approaches to suppress plant genes and enzymes that control the biosynthesis of the most toxic compounds. In related studies, we discovered that folic acid, the folic acid analog methotrexate, glucose-6-phosphate, and oxidized nicotinamide adenine dinucleotide phosphate (NADP) protected the frog embryos against chaconine-induced malformations (severe anencephaly in the brain and less severe malformations in the other organs) [167][168][169]. Practical Applications The mentioned compounds have the potential to protect against neural tube defects and other malformations in humans. This suggestion is reinforced by the reported observations that folic acid consumption during pregnancy seems to help placental development in pregnant women and protect against neural tube defects in newborns [170]. We do not know whether glucose-6-phosphate will exhibit similar beneficial effects. Table 1 lists all the inhibitors mentioned in the text. Conclusions In summary, the exploration of the concept of inhibiting the toxicological potential of natural toxins produced by fungi, bacteria, and plants by multiple approaches designed to prevent them from interacting with living cells has the potential of benefitting food safety and human health. It also contributes to our understanding of basic mechanisms of toxicity at the molecular level and should lead to the discovery of new ways to treat contaminated foods and people and to the development of new prophylactic and therapeutic compounds. To facilitate further progress, future studies need to address one or more of the following aspects of toxin inhibition: • Determine whether natural compounds can concurrently reduce both pathogens and the toxins produced by the pathogens. • Define additive and/or synergistic effects of mixtures of natural toxin inhibitors (a simple additivity baseline is sketched after this list).
• Compare efficacy of natural inhibitors against toxins in different foods, including fruit and vegetable juices, milk and cheeses, cereal grains, and meat and poultry products. • Develop anti-toxin films and coatings to protect foods against contamination by toxins [171]. • Determine whether anti-toxin effects of natural compounds and extracts in vitro can be duplicated in vivo, especially in humans. • Determine the biological significance of low levels of residual AFB1 and ricin, which seem to stimulate cell growth. • Explore the use of molecular biology anti-sense RNA methods to suppress genes that govern the biosynthesis of plant and microbial toxins.
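The second bullet in the list above (additive vs. synergistic mixtures) can be made concrete with a simple additivity baseline. The sketch below uses the Bliss-independence criterion as one common reference point for judging mixture effects; the choice of this particular model and all inhibition values are assumptions made purely for illustration and do not come from the studies cited above.

```python
# Hypothetical sketch: judging whether two toxin inhibitors act additively or synergistically
# using the Bliss-independence baseline. Inhibition values are placeholders, not measured data.

def bliss_expected(inhibition_a: float, inhibition_b: float) -> float:
    """Expected fractional inhibition if the two agents act independently (Bliss)."""
    return 1.0 - (1.0 - inhibition_a) * (1.0 - inhibition_b)

def classify_interaction(observed: float, expected: float, tol: float = 0.05) -> str:
    """Label the mixture effect relative to the independence baseline."""
    if observed > expected + tol:
        return "synergistic"
    if observed < expected - tol:
        return "antagonistic"
    return "approximately additive"

if __name__ == "__main__":
    inhibition_a = 0.40       # e.g. a galactose-rich extract alone (hypothetical)
    inhibition_b = 0.30       # e.g. a catechin alone (hypothetical)
    observed_combined = 0.75  # measured inhibition of the mixture (hypothetical)

    expected = bliss_expected(inhibition_a, inhibition_b)
    print(f"expected (Bliss) = {expected:.2f}, observed = {observed_combined:.2f} "
          f"-> {classify_interaction(observed_combined, expected)}")
```

In practice, dose-response curves for each inhibitor alone would be measured first, and more formal synergy frameworks (for example, isobolographic analysis) could replace this single-point comparison.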
Obesity and Cytokines in Childhood-Onset Systemic Lupus Erythematosus Background. In systemic lupus erythematosus (SLE), atherosclerosis is attributed to traditional and lupus-related risk factors, including metabolic syndrome (MetS), obesity, and inflammation. Objective. To evaluate the association between obesity, measures of body fat content, serum tumor necrosis factor alpha (TNF-α), and interleukin (IL)-6 and -10 levels in childhood-onset SLE (cSLE). Methods. We screened consecutive cSLE patients followed up in the Pediatric Rheumatology Outpatient Clinic of the State University of Campinas. cSLE patients were assessed for disease and damage. Obesity was defined as body mass index (BMI) ≥30 kg/m2. Serum TNF-α, IL-6, and IL-10 levels were measured by ELISA. Dual-energy X-ray absorptiometry was used to determine total fat mass, lean mass, and percent body fat. Results. We included 52 cSLE patients and 52 controls. cSLE patients had higher serum TNF-α (P = 0.004), IL-6 (P = 0.002), and IL-10 (P < 0.001) levels compared to controls. We observed higher serum TNF-α (P = 0.036) levels in cSLE patients with obesity. An association between serum TNF-α levels and body fat percent (P = 0.046) and total fat mass in the trunk region (P = 0.035) was observed. Conclusion. Serum TNF-α levels were associated with obesity and body fat content in cSLE. Our finding suggests that obesity may contribute to the increase of serum TNF-α levels in cSLE. Introduction Systemic lupus erythematosus (SLE) is a chronic systemic inflammatory disease affecting mainly women during childbearing age [1]. Although life expectancy has improved significantly, no changes in morbidity and mortality related to cardiovascular disease (CVD) have been observed in SLE patients in the past decades [2,3]. In addition to traditional risk factors, many lupus-specific factors are linked to the increased risk of CVD observed in SLE [4][5][6]. Obesity-associated systemic inflammation is characterized by increased circulating proinflammatory cytokines and activation of several kinases that regulate inflammation [7][8][9]. Recent evidence supports that obesity-induced inflammation is mediated primarily by immune cells such as the macrophages and T lymphocytes present in metabolic tissues [9]. Adipose tissue-derived cells can produce inflammatory cytokines, such as tumor necrosis factor alpha (TNF-α), interleukin (IL) 6, and IL-10 [10,11]. TNF-α and IL-6 are proinflammatory cytokines associated with increased insulin resistance and inhibition of insulin receptor autophosphorylation and signal transduction. These mechanisms lead to insulin resistance. Patients and Methods 2.1. Subjects. Fifty-two consecutive cSLE patients, recruited from the Pediatric Rheumatology Outpatient Clinic of the State University of Campinas, were included in this study. Patients were included in the present study if they (i) fulfilled at least four criteria of the American College of Rheumatology (ACR) [19]; (ii) were below 18 years of age at disease onset; and (iii) had a follow-up duration of at least 6 months (the time necessary to evaluate the damage index). Fifty-two healthy volunteers (caregivers or students) matched by age, gender, and sociodemographic characteristics were included as a control group. None of the controls had any history of chronic disease, including autoimmune diseases. This study was approved by the ethics committee at our institution, and informed written consent was obtained from each participant and/or legal guardian. Clinical Features.
All patients had their medical histories and clinical and serological characteristics entered at the time of cSLE diagnosis into special computer database programs. Features included in this protocol were age at the onset of disease (defined as the age at which the first symptoms clearly attributable to SLE occurred), age at diagnosis (defined as the age when patients fulfilled four or more of the 1987 revised criteria for the classification of SLE [19]), and follow-up time (defined as the time from disease onset until December 2012). Total doses and length of use of corticosteroids since the onset of disease were calculated by careful review of the medical charts. Doses of oral and parenteral corticosteroids were converted to the equivalent doses of prednisone. The cumulative dose of corticosteroids used was calculated as the sum of the daily doses over the time (days) of treatment. We also calculated the cumulative corticosteroid dose adjusted for weight by summing the daily corticosteroid dose per body weight at each routine visit. Disease Activity and Cumulative Damage. Disease activity was measured by the Systemic Lupus Erythematosus Disease Activity Index (SLEDAI) [20]. SLEDAI scores range between 0 and 105, and scores of ≥3 were considered to indicate active disease [21]. Adjusted SLEDAI scores over time were calculated by careful review of the medical charts and previous exams [22]. Cumulative SLE-related damage in all patients was determined by using the Systemic Lupus International Collaborating Clinics (SLICC)/ACR Damage Index (SDI) [23]. Body Mass Index. Body mass index (BMI) was calculated as weight (kg) divided by height (m) squared (kg/m²). Criteria used to define nutritional status were based on the World Health Organization (WHO) criteria [24]. BMI cutoff points for Brazilian children and adolescents were used for individuals between 2 and 18 years [25]. Obesity was considered when BMI was above 30 kg/m². Dual X-Ray Absorptiometry (DXA). Percent body fat (PBF), fat mass, and lean mass were obtained by DXA scan (Hologic Discovery Wii) using the whole-body auto fan beam mode. This scan determines total fat mass and total lean mass in kilograms, in addition to total fat mass and total lean mass as a percentage of total body mass. Blood Sampling. Blood samples were collected from peripheral veins of all individuals in dry tubes and left to clot at room temperature for 30 minutes. Blood samples were then centrifuged for 15 minutes at 3000 rpm, and the serum was stored in aliquots at −80 °C for future use. We did not collect blood samples from individuals during an episode of acute or chronic infection. Cytokine Assay. Commercially available kits from R&D Systems (London, UK) were used for the measurement of serum TNF-α, IL-6, and IL-10 levels by enzyme-linked immunosorbent assay (ELISA), carried out in accordance with the manufacturer's instructions. The minimum detectable dose (MDD) was 0.106 pg/mL for TNF-α, 0.039 pg/mL for IL-6, and 3.9 pg/mL for IL-10. Statistical Analysis. All the data were tested for normal distribution (Kolmogorov-Smirnov test). Categorical variables were compared by the χ² test. Nonnormal variables were compared by Fisher exact tests. The Mann-Whitney test was used to compare anthropometric measures and laboratory results between patients and controls. Spearman's correlation was used to correlate continuous variables (e.g., TNF-α levels, SLEDAI, and SDI scores). For all analyses, a P value ≤ 0.05 was considered statistically significant.
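The comparisons described in the Statistical Analysis paragraph can be reproduced with standard statistical libraries. The sketch below is a minimal, hypothetical illustration using scipy with synthetic placeholder arrays rather than the study data; the original analysis was performed in SPSS, as noted in the next paragraph.

```python
# Minimal sketch of the statistical comparisons described above, using scipy.
# All arrays are synthetic placeholders standing in for patient and control measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tnf_patients = rng.lognormal(mean=1.0, sigma=0.6, size=52)   # hypothetical serum TNF-alpha, pg/mL
tnf_controls = rng.lognormal(mean=0.6, sigma=0.6, size=52)
sledai = rng.integers(0, 20, size=52).astype(float)          # hypothetical SLEDAI scores

# Normality check (Kolmogorov-Smirnov against a fitted normal distribution)
ks_stat, ks_p = stats.kstest(tnf_patients, "norm",
                             args=(tnf_patients.mean(), tnf_patients.std(ddof=1)))

# Patients vs. controls (Mann-Whitney U, two-sided)
u_stat, mw_p = stats.mannwhitneyu(tnf_patients, tnf_controls, alternative="two-sided")

# Correlation of continuous variables (Spearman)
rho, sp_p = stats.spearmanr(tnf_patients, sledai)

# Categorical 2x2 comparison, e.g. overweight: 16/52 patients vs. 6/52 controls
table = np.array([[16, 52 - 16], [6, 52 - 6]])
chi2, chi_p, dof, _ = stats.chi2_contingency(table)
odds_ratio, fisher_p = stats.fisher_exact(table)

print(f"KS normality p={ks_p:.3f}, Mann-Whitney p={mw_p:.3f}, "
      f"Spearman rho={rho:.2f} (p={sp_p:.3f}), chi2 p={chi_p:.3f}, Fisher p={fisher_p:.3f}")
```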
Statistical analysis was carried out using IBM SPSS Statistics 16.0 software (SPSS/IBM, Chicago, IL, USA). Demographics. We included 52 consecutive cSLE patients. Forty-seven (90.3%) were women, with a mean age of 17.6 years (standard deviation (SD) ± 3.7 years). Mean disease duration was 5.14 years (SD ± 4.05). The control group consisted of 52 controls (47 women) with a mean age of 18.2 years (SD ± 6.4). Patients and healthy controls were statistically comparable in terms of age and sex (Table 1). Sixteen (31%) cSLE patients were overweight compared to 6 (11.5%) controls (P = 0.018). We did not observe an association between BMI and SLEDAI, SDI, or cumulative corticosteroid dose. No association between serum IL-6 and IL-10 levels and SLEDAI or SDI scores was observed. In addition, no difference in these cytokine levels between cSLE patients and controls with and without obesity was observed. Discussion Adipose tissue is known to be capable of secreting cytokines such as TNF-α, IL-6, and IL-10. Therefore, the purpose of this study was to assess whether the levels of these cytokines were increased in obese cSLE patients when compared to nonobese cSLE patients and healthy controls. The observation that obese cSLE patients had higher serum TNF-α levels when compared to nonobese cSLE patients and healthy controls is the major finding of our study. In addition, we observed that serum TNF-α levels correlated with PBF and total fat mass in the trunk region in cSLE. Recent studies have demonstrated that increased adipose tissue mass contributes towards an increase in chronic inflammation [26,27]. Chronic inflammation is further enhanced by inflammatory markers produced in the liver and in other organs [28]. Recently, it has been demonstrated that obesity is associated with a low-grade inflammatory process, characterized by increased circulating levels of proinflammatory cytokines such as TNF-α and IL-6 and of acute-phase proteins such as C-reactive protein (CRP) [29][30][31][32]. The mechanism underlying increased inflammation in the setting of obesity remains unclear, but it is known that mononuclear cells are activated and proinflammatory cytokines are upregulated in obese individuals [33,34]. We observed an association between serum TNF-α levels and PBF and total fat mass in the trunk region. Studies analyzing the association between serum TNF-α and DXA measurements have not been reported in cSLE so far, but studies in healthy women and type-2 diabetes patients showed an association between plasma levels of TNF-α and visceral adipose tissue volume measured by CT scan [35][36][37][38]. Previous studies have shown that visceral fat accumulation is associated with increased cardiovascular (CV) risk [37]. In addition, with an increase in TNF-α, a reduction in lipoprotein lipase activity in adipose tissue is observed [39]. There is also evidence that TNF-α has a local effect, regulating adipocyte size in the face of increasing energy consumption [40,41]. Cytokines such as TNF-α and IL-6 are primarily involved in the early stages of the inflammatory response culminating in atherosclerosis [39,42]. Increased TNF-α levels in the endothelium promote initial atheroma plaque formation [39,42]. However, so far, studies have not been able to conclude whether TNF-α is a causative factor in atherosclerosis. Both IL-6 and TNF-α are expressed and secreted by human adipose tissue [43]. In obesity, increased secretion of IL-6 may contribute to metabolic dysfunction [44,45]. In addition, one previous study has shown that IL-6 correlated positively with BMI and with measures of insulin resistance in abdominally obese male subjects [45].
As previously described in adult SLE patients, we observed higher IL-6 and IL-10 levels in cSLE patients when compared to healthy controls [46][47][48][49]. However, no association with BMI was observed in our cSLE cohort. IL-10 downregulates inflammatory activation of monocytes and macrophages by transcriptional and posttranscriptional inhibition of the entire range of proinflammatory cytokines [50]. IL-10 has been shown to reduce atherosclerosis, and it can be found in atheromatous plaque due to local macrophage production [50]. However, IL-10 is involved in SLE pathogenesis, and it is increased in SLE patients with CVD compared to SLE patients without CVD [51,52]. In our study, we did not observe an association between serum IL-10 levels and obesity. We also did not observe an association between serum IL-6 levels and obesity. In the literature, it has been described that plasma IL-6 levels are associated with increased CV risk and have been observed in SLE patients with metabolic syndrome [53] and in patients with type 2 diabetes [44,54]. In a large healthy family population study that included children, IL-6 levels were closely associated with traditional and nontraditional risk factors for atherosclerosis [55]. Although cSLE is rare, it is important to consider that one limitation of our study is the small number of patients and controls included. Corticosteroids are associated with weight gain due to increased appetite and fluid retention. Corticosteroids also cause a redistribution of fat deposition, occurring predominantly in the trunk and face [56][57][58][59]. However, we did not observe an association between serum TNF-α, IL-6, and IL-10 levels and corticosteroid dose. To the best of our knowledge, this is the first study to evaluate the association of BMI and body composition with serum TNF-α, IL-6, and IL-10 levels in cSLE patients. Although these cytokines have been shown to be associated with CVD in other populations, we only observed an association of serum TNF-α levels with obesity, PBF, and total fat mass in the trunk region. Our findings suggest that total fat mass may contribute to increased serum TNF-α levels in cSLE.
SETTLEMENT OF GEOSYNTHETIC ENCASED STONE COLUMNS LIQUEFACTION CONDITION IN BOX CULVERT. When the box culvert system is placed on a sandy soil layer with a relatively low bearing capacity that is susceptible to liquefaction, the soil layer must be improved to avoid damage to the box culvert structure. The proposed method is Geosynthetic Encased Stone Columns (GESC), intended to increase the bearing capacity and mitigate the liquefaction potential. However, for the GESC soil improvement to be stable and safe under liquefaction conditions, the calculated settlement must remain within the allowable settlement limit. This research was conducted to determine the potential for liquefaction at the study location, to calculate single and group settlements under liquefaction conditions, and to analyze whether the single and group settlements are safe or unsafe under liquefaction conditions. Liquefaction potential was analyzed based on SPT data using the Valera and Donovan method, and settlement analysis applied the Almeida and Alexiew method. The analysis shows that potential liquefaction due to an earthquake with a magnitude of 9.0 SR will occur at a depth of 4 to 8 m. The single and group settlements (144 columns), with an installation spacing of 1.2 m, a diameter of 0.4 m, and a depth of 10 m, are 246.23 and 214.92 mm, respectively. The entire GESC system is considered to be in an unstable and unsafe condition against potential liquefaction and box culvert loading. In reviewing the hazards and analyzing the potential for liquefaction, the Valera and Donovan method is used to find the critical SPT value, Ncrit, as the criterion for liquefaction or non-liquefaction conditions, following Eq. (4) [15]-[17]. Here, Ncrit is the critical N-SPT value, ds is the depth of the sand layer under review, dw is the depth of the groundwater level below the ground surface, and N0 is a function of the vibration intensity due to tectonic earthquakes. The MMI scale is determined from the damage to buildings and the effects felt by humans during the earthquake. During liquefaction, the frictional strength of the clay layer receives only 30 percent of the total overburden stress, meaning that the frictional resistance is corrected to 30 percent [18], [19]. In this study, the frictional resistance is corrected by 30-50% to account for liquefaction phenomena that can affect the single and group settlement of the Geosynthetic Encased Stone Columns. The purpose of this study was to assess the potential for liquefaction in the box culvert review area using the Valera and Donovan method based on standard penetration test (SPT) data, then to calculate the settlement of single and group Geosynthetic Encased Stone Columns under liquefaction conditions based on the results of the Valera and Donovan liquefaction potential analysis, and finally to analyze the stability of the single and group settlements of the Geosynthetic Encased Stone Columns against box culvert loading. METHODS In this study, the box culvert is planned at STA 127+100 of the Trans Sumatra Toll Road, Kisaran-Tebing Tinggi, Indrapura-Kisaran section, on a sandy soil layer that has a relatively low bearing capacity and lies in an area with a high potential for earthquakes and liquefaction, as shown in Figure 1. In this case, Geosynthetic Encased Stone Columns were provided as soil improvement at the STA 127+100 box culvert location.
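Because Eq. (4) itself is not reproduced in the text above, the sketch below assumes the commonly quoted form of the critical-N criterion, Ncrit = N0 [1 + 0.125(ds - 3) - 0.05(dw - 2)], with N0 taken as 16 blows/ft for MMI intensity IX, as stated later in the paper. The depth profile used is hypothetical; treat this strictly as an illustration of the screening logic, not as the paper's exact equation.

```python
# Rough sketch of the critical-N liquefaction screening described above. The exact form of
# Eq. (4) is not reproduced in the text, so the commonly quoted expression
#   Ncrit = N0 * (1 + 0.125*(ds - 3) - 0.05*(dw - 2))
# is assumed here; N0 = 16 blows/ft corresponds to MMI intensity IX.

def n_crit(depth_sand_m: float, depth_water_m: float, n0: float = 16.0) -> float:
    """Critical SPT blow count at a given depth (assumed form of the Valera-Donovan criterion)."""
    return n0 * (1.0 + 0.125 * (depth_sand_m - 3.0) - 0.05 * (depth_water_m - 2.0))

def liquefiable(n_spt: float, depth_sand_m: float, depth_water_m: float) -> bool:
    """Liquefaction is flagged where the measured (corrected) N falls below Ncrit."""
    return n_spt < n_crit(depth_sand_m, depth_water_m)

if __name__ == "__main__":
    groundwater_depth = 1.0  # m, hypothetical for the borehole
    # (depth in m, corrected N-SPT) -- hypothetical profile, not the BH-01 data
    profile = [(2, 18), (4, 9), (6, 11), (8, 14), (10, 25)]
    for depth, n_value in profile:
        flag = "liquefaction" if liquefiable(n_value, depth, groundwater_depth) else "no liquefaction"
        print(f"depth {depth:>2d} m: N = {n_value:>2d}, "
              f"Ncrit = {n_crit(depth, groundwater_depth):5.1f} -> {flag}")
```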
Design Data The data used in this study are secondary data from field investigations (standard penetration test (SPT) at point BH-01, STA 127+100) and laboratory testing by PT. Cipta Indah Citra, PT. PP, and the USU soil mechanics laboratory, shown in Table 1 and Figure 2, together with other data in the form of box culvert dimensions and road cross-sections shown in Figure 3. As a preliminary design for the box culvert, dimensions of 1.5 × 1.5 m are used with a length of 86 m, according to the road cross-section at STA 127+100 shown in Figure 3. In the initial design, the Geosynthetic Encased Stone Columns use a diameter of 0.4 m, a spacing of 3D (1.2 m), and a length of 10 m, with Ringtrac 6500 PM geosynthetic tubular encasement material of 0.4 m diameter, as shown in Figure 4, and stone material with the following specifications: γs of 2.2 t/m³, ϕ of 34°, and cohesion C of 0 t/m² [7], [20]. This research comprises several stages, including preliminary design, calculation of loading, soil cohesion analysis, and correction of the N-SPT values; the data analysis is carried out in the following steps:
1. Calculate the load on the box culvert with reference to SNI 1725:2016 and SNI 1726:2019 [21], [22], based on the box culvert dimensions and the road cross-section.
2. Perform axial, transverse, and moment force analysis on the box culvert loading results using SAP 2000 software.
3. Calculate the soil cohesion along the depth of the soil layers and correct the N values using the standard penetration test (SPT) data.
4. Calculate the critical N value (Ncrit) along the depth of the soil layers based on the standard penetration test data and determine whether each depth is in a liquefaction or non-liquefaction condition based on the Valera and Donovan liquefaction potential analysis method.
5. Calculate and determine the geosynthetic encased stone column design parameters under liquefied soil conditions, namely the corrected soil cohesion in the liquefied soil layers (based on the Valera and Donovan liquefaction potential analysis) and several other parameters such as the void ratio, soil unit weight, active and passive earth pressure coefficients, the at-rest lateral pressure coefficient based on Brooker and Ireland and on Jaky, Poisson's ratio, and the soil elastic modulus based on Webb [23].
6. Plan the geometric pattern of the spacing and diameter of the geosynthetic encased stone columns based on the Raithel and Kempfert models.
7. Calculate the column and soil stresses: the vertical stress on the column and the horizontal stresses on the column and the surrounding soil, based on the Raithel and Kempfert method.
8. Calculate the geotextile requirement using the Ringtrac 6500 PM to obtain the horizontal geotextile stress and the total horizontal soil stress based on the Raithel and Kempfert method.
9. Perform settlement calculations for single and group geosynthetic encased stone columns using the Almeida and Alexiew method [14].
10. Analyze the settlement stability of the single and group Geosynthetic Encased Stone Columns.
11. Draw final conclusions on the single and group settlement of the Geosynthetic Encased Stone Columns under liquefaction conditions based on the analysis of liquefaction potential using the Valera and Donovan method.
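As a small aid to step 6 above, the sketch below computes the basic unit-cell geometry implied by the preliminary design (0.4 m diameter columns at 1.2 m spacing). A square influence cell is assumed here for simplicity; the unit-cell idealisation used in the Raithel and Kempfert model may differ, and the quantities shown are only geometric.

```python
# Small sketch of the column/unit-cell geometry implied by the preliminary design.
# A square influence cell around each column is an assumption made for illustration.
import math

d_col = 0.40      # column diameter, m
spacing = 1.20    # centre-to-centre spacing (3D), m
length = 10.0     # column length, m

a_col = math.pi * d_col**2 / 4.0   # column cross-sectional area Ac
a_cell = spacing**2                # influence (unit-cell) area Ae, square-grid assumption
area_ratio = a_col / a_cell        # area replacement ratio as = Ac / Ae

print(f"Ac = {a_col:.4f} m^2, Ae = {a_cell:.3f} m^2, as = Ac/Ae = {area_ratio:.3f}")
print(f"stone volume per column = {a_col * length:.3f} m^3")
```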
RESULTS AND DISCUSSION 3.1 Calculation of box culvert loading The box culvert loading was calculated using SNI 1725:2016 and SNI 1726:2019, and the loading analysis was carried out using SAP 2000 [21], [22]. The results are shown in Table 2. Soil Cohesion Analysis The secondary data were processed in the form of soil cohesion analysis and correction of the N values in the N-SPT data. The results are shown in Table 3. Analysis of Liquefaction Potential by the Valera and Donovan Method In the analysis of liquefaction potential by the Valera and Donovan method, the largest earthquake of the last 100 years was used, namely the 2004 Aceh earthquake, with a magnitude of 9.0 SR and a maximum intensity of level IX on the MMI scale [24]. The value of N0 for MMI level IX is 16 blows/ft [15]-[17]. Ncrit was calculated at depths of 0-24 m at test point BH-01, STA 127+100. If N > Ncrit, there is no liquefaction at that depth under a 9.0 SR earthquake; if N < Ncrit, the soil at that depth liquefies under a 9.0 SR earthquake. The results of the Valera and Donovan liquefaction potential analysis and calculation can be seen in Table 4. Table 4 shows that liquefaction will occur at depths of 4-8 m, so the area is safe at depths of more than 8 m. Geosynthetic Encased Stone Column Settlement Design Parameters The GESC design parameters are determined for liquefaction conditions at a depth of 4-8 m based on the Valera and Donovan liquefaction potential analysis, namely the corrected cohesion: corrected to 30% at depths of 0 to 8 m and to 50% at depths of 9 to 24 m, as shown in Table 5. Geometric Plan of the Geosynthetic Encased Stone Columns The GESC diameter is 0.4 m with a spacing of 1.2 m in a rectangular pattern. Several parameters are then calculated, including the column area (Ac); see Eq. (5). Calculation of Vertical and Horizontal Stresses Column and Soil Vertical Stress The stress received by the stone column and the surrounding soil is calculated by multiplying the stress due to the box culvert load by the stress ratio; for the vertical stress on the column refer to Eq. (12), and for the vertical stress on the soil refer to Eq. (13). The vertical stresses on the soil and stone column for each soil layer are shown in Table 6. Loading on the box culvert structure also produces horizontal pressure. The summary of the horizontal stresses from the column (σhc) and from the surrounding soil (σhs) is shown in Tables 7 and 8. Horizontal Stress Calculation after the Encasement is Installed From Tables 7 and 8, it can be seen that the soil is not able to withstand the horizontal stress from the column (σhc > σhs), so a geotextile is required. For the calculation of σh,geo with the Ringtrac 6500 PM high-modulus, low-creep geotextile encasement material, refer to Eq. (14)-(15). After obtaining the horizontal stress that the geotextile is able to withstand, it is added to the horizontal stress of the soil to resist the horizontal stress of the column. A summary of these conditions can be seen in Table 8.
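The horizontal equilibrium argument in the last paragraph above can be sketched as a simple check: the column's horizontal stress must be balanced by the surrounding soil plus the confinement supplied by the encasement. The confinement term below uses a thin-membrane (hoop) approximation, sigma_h,geo = T / r_geo with T = J times the ring strain, which is a simplifying assumption rather than the Raithel and Kempfert Eq. (14)-(15), and all numerical values are hypothetical.

```python
# Sketch of the horizontal equilibrium check motivating the geotextile encasement.
# sigma_h,geo is approximated with a hoop-membrane relation; this is an assumption made
# for illustration only and is not the formulation used in the paper.

def ring_confinement(tensile_stiffness_j: float, ring_strain: float, radius_m: float) -> float:
    """Horizontal confining stress supplied by the encasement (hoop approximation), t/m^2."""
    ring_force = tensile_stiffness_j * ring_strain   # T = J * eps, t/m
    return ring_force / radius_m

def encasement_sufficient(sigma_h_column: float, sigma_h_soil: float, sigma_h_geo: float) -> bool:
    """True if soil support plus geotextile confinement balances the column's horizontal stress."""
    return sigma_h_column <= sigma_h_soil + sigma_h_geo

if __name__ == "__main__":
    sigma_hc = 18.0    # t/m^2, horizontal stress from the column (hypothetical layer value)
    sigma_hs = 9.0     # t/m^2, horizontal resistance of the surrounding soil (hypothetical)
    j = 650.0          # t/m, hypothetical tensile stiffness of the encasement
    eps_ring = 0.02    # assumed ring strain
    r_geo = 0.20       # m, encasement radius

    sigma_h_geo = ring_confinement(j, eps_ring, r_geo)
    ok = encasement_sufficient(sigma_hc, sigma_hs, sigma_h_geo)
    print(f"sigma_h,geo = {sigma_h_geo:.1f} t/m^2 -> encasement "
          f"{'sufficient' if ok else 'insufficient'}")
```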
Single Settlement of Geosynthetic Encased Stone Columns From the design parameters and the calculated vertical and horizontal stresses, the settlement can be computed for Layer 1, with a length of 2 m at a depth of 0-2 m, and Layer 2, with a length of 8 m at a depth of 3-10 m. The constrained modulus is taken as the average value over 8D above and 4D below, namely 3757.9 t/m². The single settlement of the Geosynthetic Encased Stone Columns is then calculated using the Almeida and Alexiew method over the column length at 0-2 m depth (refer to the corresponding equations). The calculation results are summarized in Table 9. The settlement of the group of 144 Geosynthetic Encased Stone Columns is calculated using the Almeida and Alexiew method. The calculation results are recapitulated in Table 10. Stability Analysis of the Single Settlement of Geosynthetic Encased Stone Columns From the calculation results shown in Table 9 for the single settlement of the Geosynthetic Encased Stone Columns, it was found that the settlement of 246.23 mm exceeds the allowable limit of 25.4 mm; thus the single settlement of the
Input margins can predict generalization too Understanding generalization in deep neural networks is an active area of research. A promising avenue of exploration has been that of margin measurements: the shortest distance to the decision boundary for a given sample or its representation internal to the network. While margins have been shown to be correlated with the generalization ability of a model when measured at its hidden representations (hidden margins), no such link between large margins and generalization has been established for input margins. We show that while input margins are not generally predictive of generalization, they can be if the search space is appropriately constrained. We develop such a measure based on input margins, which we refer to as `constrained margins'. The predictive power of this new measure is demonstrated on the 'Predicting Generalization in Deep Learning' (PGDL) dataset and contrasted with hidden representation margins. We find that constrained margins achieve highly competitive scores and outperform other margin measurements in general. This provides a novel insight on the relationship between generalization and classification margins, and highlights the importance of considering the data manifold for investigations of generalization in DNNs. Introduction Our understanding of the generalization ability of deep neural networks (DNNs) remains incomplete. Various bounds on the generalization error for classical machine learning models have been proposed based on the complexity of the hypothesis space [1,2]. However, this approach paints an unfinished picture when considering modern DNNs [3]. Generalization in DNNs is an active field of study and updated bounds are proposed on an ongoing basis [4,5,6,7]. A complementary approach to developing theoretical bounds is to develop empirical techniques that are able to predict the generalization ability of certain families of DNN models. The 'Predicting Generalization in Deep Learning' (PGDL) challenge exemplifies such an approach. The challenge was held at NeurIPS 2020 [8] and provides a useful test bed for evaluating complexity measures, where a complexity measure is a scalar-valued function that relates a model's training data and parameters to its expected performance on unseen data. Such a predictive complexity measure would not only be practically useful but could lead to new insights into how DNNs generalize. In this work, we focus on classification margins in deep neural classifiers. It is important to note that the term 'margin' is, often confusingly, used to refer to 1) output margins [9], 2) input margins [10], and 3) hidden margins [11], interchangeably. Here (1) is a measure of the difference in class output values, while (2) or (3) is concerned with measuring the distance from a sample to its nearest decision boundary in either input or hidden representation space, respectively. In this work, we focus on input and hidden margins.
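To make the terminology in the previous paragraph concrete, the toy snippet below computes an output margin (a difference of class scores) for a single sample; input and hidden margins, by contrast, are geometric distances to the decision boundary and are approximated later in the paper. The logits used here are arbitrary placeholders.

```python
# Toy illustration of the distinction drawn above: the "output margin" is a difference of
# class scores, whereas input/hidden margins are distances to the decision boundary in the
# corresponding representation space. Values are arbitrary placeholders.
import numpy as np

def output_margin(logits: np.ndarray, true_class: int) -> float:
    """f_i(x) minus the largest competing class score."""
    competing = np.delete(logits, true_class)
    return float(logits[true_class] - competing.max())

logits = np.array([2.3, 0.1, 1.7, -0.5])   # hypothetical class scores f_k(x)
print(f"output margin = {output_margin(logits, true_class=0):.2f}")
# An input margin, by contrast, would be the distance from the sample x to the nearest
# point on the decision boundary in input space, approximated later via Equation (5).
```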
While margins measured at the hidden representations of deep neural classifiers have been shown to be predictive of a model's generalization, this link has not been established for input space margins. We show that, in several circumstances, the classical definition of input margin does not predict generalization, but a direction-constrained version of this metric does: a quantity we refer to as constrained margins. By measuring margins in directions of 'high utility', that is, directions that are expected to be more useful to the classification task, we are able to better capture the generalization ability of a trained DNN. We make several contributions: 1. Demonstrate the first link between large input margins and generalization performance, by developing a new input margin-based complexity measure that achieves highly competitive performance on the PGDL benchmark and outperforms several contemporary complexity measures. 2. Show that margins do not necessarily need to be measured at multiple hidden layers to be predictive of generalization, as suggested in [11]. 3. Provide a new perspective on margin analysis and how it applies to DNNs, that of finding high utility directions along which to measure the distance to the boundary instead of focusing on finding the shortest distance. Background This section provides an overview of existing work on 1) measuring classification margins and their relationship to generalization, and 2) the PGDL challenge and related complexity measures. Classification Margins and Generalization Considerable prior work exists on understanding classification margins in machine learning models [12,13]. The relation between margin and generalization is well understood for classifiers such as support vector machines (SVMs) under statistical learning theory [1]. However, the non-linearity and high dimensionality of DNN decision boundaries complicate such analyses, and precisely measuring these margins is considered intractable [14,15]. A popular technique (which we revisit in this work) is to approximate the classification margin using a first-order Taylor approximation. Elsayed et al. [16] use this method in both the input and hidden space, and then formulate a loss function that maximizes these margins. However, while this results in a measurable increase in margin, it does not result in any significant gains in test accuracy. In a seminal paper, Jiang et al. [11] utilize the same approximation in order to predict the generalization gap of a set of trained networks by training a linear regression model on a summary of their hidden margin distributions. Natekar and Sharma [17] demonstrate that this measure can be further improved if margins are measured using the representations of Mixup [18] or augmented training samples. Similarly, Chuang et al. [6] introduce novel generalization bounds and slightly improve on this metric by proposing an alternative cluster-aware normalization scheme (k-variance [19]). Input margins are generally considered from the point of view of adversarial robustness, and many techniques have been developed to generate adversarial samples on or near the decision boundary. Examples include the Carlini and Wagner attack [20], Projected Gradient Descent [21], and DeepFool [22]. Some of these studies have investigated the link between adversarial robustness and generalization, often concluding that an inherent trade-off exists [23,24,25]. However, this conclusion and its intricacies are still being debated [26].
Yousefzadeh and O'Leary [14] formulate finding a point on the decision boundary as a constrained minimization problem, which is solved using an off-the-shelf optimization method. While this method is more precise, it comes at a great computational cost. To alleviate this, dimensionality reduction techniques are used in the case of image data to reduce the number of input features. The same formulation was later applied in [27] without any prior dimensionality reduction, at the expense of a significant computational burden. In this work we propose a modification to the Taylor approximation of the input classification margin (and its iterative alternative DeepFool) in order for it to be more predictive of generalization. Predicting Generalization in Deep Learning The PGDL challenge was a competition hosted at NeurIPS 2020 [8]. The objective of this challenge was to design a complexity measure to rank models according to their generalization gap. More precisely, participants only had access to a set of trained models, along with their parameters and training data, and were tasked with ranking the models within each set according to their generalization gap. Each solution was then evaluated on how well its ranking aligns with the true ranking on a held-out set of tasks, which was unknown to the competitors. In total, there are 550 trained models across 8 different tasks and 6 different image classification datasets, where each task refers to a set of models trained on the same dataset with varying hyperparameters and subsequent test accuracy. Tasks 1, 2, 4, and 5 were available for prototyping and tuning complexity measures, while Tasks 6 to 9 were used as a held-out set. There is no task 3. The final average score on the test set was the only metric used to rank the competitors. Conditional mutual information (CMI) is used as the evaluation metric, which measures the conditional mutual information between the complexity measure and the true generalization gap, given that a set of hyperparameter types are observed. This is done in order to prevent spurious correlations resulting from specific hyperparameters, a step towards establishing whether a causal relationship exists. All models were trained to approximately the same, near zero, training loss. Note that this implies that ranking models according to either their generalization gap or test accuracy is essentially equivalent. Several interesting solutions were developed during the challenge: In addition to the modification of hidden margins mentioned earlier, the winning team [17] developed several prediction methods based on the internal representations of each model. Their best-performing method measures clustering characteristics of hidden layers (using the Davies-Bouldin Index [28]), and combines this with the model's accuracy on Mixup-augmented training samples. In a similar fashion, the runners-up based their metrics on measuring the robustness of trained networks to augmentations of their training data [29]. After the competition's completion, the dataset was made publicly available, inspiring further research: Schiff et al. [30] generated perturbation response curves that 'capture the accuracy change of a given network as a function of varying levels of training sample perturbation' and develop statistical measures from these curves. They produced eleven complexity measures with different types of sample Mixup and statistical metrics. While several of the methods rely on using synthetic samples (e.g. Mixup), Zhang et al.
[31] take this to the extreme and generate an artificial test set using pretrained generative adversarial networks (GANs). They demonstrate that simply measuring the classification accuracy on this synthetic test set is very predictive of a model's generalization. While practically useful, this method does not make a link between any characteristics of the model and its generalization ability. Theoretical approach This section provides a theoretical overview of the proposed complexity measure. We first explain our intuition surrounding classification margins, before mathematically formulating constrained margins. Intuition A correctly classified training sample with a large margin can have more varied feature values, potentially due to noise, and still be correctly classified. However, as we will show, input margins are not generally predictive of generalization. This observation is supported by literature regarding adversarial robustness, where it has been shown that adversarial retraining (which increases input margins) can negatively affect generalization [23,25]. Stutz et al. [26] provide a plausible reason for this counter-intuitive observation: through the use of Variational Autoencoder GANs they show that the majority of adversarial samples leave the class-specific data manifold of the samples' class. They offer the intuitive example of black border pixels in the case of MNIST images, which are zero for all training samples. Samples found on the decision boundary which manipulate these border pixels have a zero probability under the data distribution, and they do not lie on the underlying manifold. We leverage this intuition and argue that any input margin measure that relates to generalization should measure distances along directions that do not rely on spurious features in the input space. The intuition is that, while nearby decision boundaries exist for virtually any given training sample, these nearby decision boundaries are likely in directions which are not inherently useful for test set classification, i.e. they diverge from the underlying data manifold. More specifically, we argue that margins should be measured in directions of 'high utility', that is, directions that are expected to be useful for characterising a given dataset, while ignoring those of lower utility. In our case, we approximate these directions by defining high utility directions as directions which explain a large amount of variance in the data. We extract these using Principal Component Analysis (PCA). While typically used as a dimensionality reduction technique, PCA can be interpreted as learning a low-dimensional manifold [32], albeit a locally linear one. In this way, the PCA manifold identifies subspaces that are thought to contain the variables that are truly relevant to the underlying data distribution, which the out-of-sample data is assumed to also be generated from. In the following section, we formalize such a measure. Constrained Margins We first formulate the classical definition of an input margin [14], before adapting it for our purpose. For a correctly classified input sample x, the goal is to find the closest point x̃ on the decision boundary between the true class i (where i = arg max_k f_k(x)) and another class j ≠ i. Formally, x̃ is found by solving the constrained minimization problem min_x̃ ∥x − x̃∥₂ (Equation (1)), with L and U the lower and upper bounds of the search space (L ≤ x̃ ≤ U), respectively, such that f_i(x̃) = f_j(x̃) (Equation (2)) for i and j as above.
The margin is then given by the Euclidean distance between the input sample, x, and its corresponding sample on the decision boundary, x̂. We now adapt this definition in order to define a 'constrained margin'. Let the set P = {p_1, p_2, ..., p_m} denote the first m principal component vectors of the training dataset, that is, the m orthogonal principal components which explain the most variance. Such principal components are straightforward to extract by calculating the eigenvectors of the covariance matrix of the normalized training data, where the data is normalized the same as prior to model training.

We now restrict x̂ to any point consisting of the original sample x plus a linear combination of these (unit length) principal component vectors, that is,

    x̂ = x + Σ_{k=1}^{m} α_k p_k    (3)

for some coefficient vector α = (α_1, ..., α_m). Substituting x̂ into the original objective function of Equation (1), the new objective becomes

    min_α || Σ_{k=1}^{m} α_k p_k ||_2    (4)

such that Equation (2) is approximated within a certain tolerance. For this definition of margin, the search space is constrained to a lower-dimensional subspace spanned by the principal components with point x as origin, and the optimization problem then simplifies to finding a point on the decision boundary within this subspace. By doing so, we ensure that boundary samples that rely on spurious features (that is, in directions of low utility) are not considered viable solutions to Equation (1). Note that this formulation does not take any class labels into account for identifying high utility directions.

While it is possible to solve the constrained minimization problem using a constrained optimizer [14], we approximate the solution by adapting the previously mentioned first-order Taylor approximation [16,33], which greatly reduces the computational cost. The Taylor approximation of the constrained margin d(x) for a sample x between classes i and j when using an L2 norm is given by

    d(x) = ( f_i(x) − f_j(x) ) / || P ( ∇_x f_i(x) − ∇_x f_j(x) ) ||_2    (5)

where P is the m × n matrix formed by the top m principal components with n input features. The derivation of Equation (5) is included in the appendix (Section C).

The value d(x) only approximates the margin and the associated discrepancy in Equation (2) can be large. In order to reduce this to within a reasonable tolerance, we apply Equation (5) in an iterative manner, using a modification of the well-known DeepFool algorithm [22]. DeepFool was defined in the context of generating adversarial samples with the smallest possible perturbation, which is in effect very similar to finding the nearest point on the decision boundary with the smallest violation of Equation (2).
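The sketch below (again our own illustration, not code from the original work) shows how Equation (5) and the iterative refinement described next in Algorithm 1 might be implemented for a toy linear classifier, where logit gradients are available in closed form. The linear 'network', the random orthonormal stand-in for P, and all numerical values are assumptions made purely for illustration; the learning rate, tolerance and iteration cap mirror the defaults reported later in the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, num_classes, m = 64, 3, 5

# Toy linear "network": logits f(x) = W x + b, so the gradient of logit k with
# respect to x is simply W[k]. In a real DNN these gradients would come from
# automatic differentiation.
W = rng.normal(size=(num_classes, n))
b = rng.normal(size=num_classes)

def logits(x):
    return W @ x + b

def logit_grad(x, k):
    return W[k]

# Stand-in for the top-m principal components: random orthonormal rows. In the
# actual method these are eigenvectors of the training data covariance matrix.
Q, _ = np.linalg.qr(rng.normal(size=(n, m)))
P = Q.T                                            # shape (m, n)

def taylor_constrained_margin(x, i, j):
    """One-step estimate of the constrained margin between classes i and j
    (the quantity denoted d(x) in Equation (5))."""
    grad_diff = P @ (logit_grad(x, i) - logit_grad(x, j))
    return (logits(x)[i] - logits(x)[j]) / np.linalg.norm(grad_diff)

def deepfool_constrained_margin(x, lo, hi, gamma=0.25, tol=0.01, max_iter=100):
    """Iterative refinement in the spirit of Algorithm 1: step towards the
    nearest class boundary inside the subspace, clip to the data bounds, and
    stop once the distance to the original sample stabilises."""
    i = int(np.argmax(logits(x)))
    x_hat, prev_dist = x.copy(), 0.0
    for _ in range(max_iter):
        margins = {j: taylor_constrained_margin(x_hat, i, j)
                   for j in range(num_classes) if j != i}
        j_star = min(margins, key=lambda j: abs(margins[j]))
        grad_diff = P @ (logit_grad(x_hat, i) - logit_grad(x_hat, j_star))
        # Minimal subspace perturbation towards the boundary, mapped back to
        # the input space through P^T, then scaled by the learning rate.
        r = -margins[j_star] * (P.T @ grad_diff) / np.linalg.norm(grad_diff)
        x_hat = np.clip(x_hat + gamma * r, lo, hi)
        dist = np.linalg.norm(x_hat - x)
        if abs(dist - prev_dist) < tol:
            break
        prev_dist = dist
    return np.linalg.norm(x_hat - x)

x = rng.normal(size=n)
print(deepfool_constrained_margin(x, lo=-3.0, hi=3.0))
```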
To extract the DeepFool constrained margin for some sample x, the Taylor approximation of the constrained margin is calculated between the true class i and all other classes j, individually. A small step (scaled by a set learning rate) is then taken in the lower-dimensional subspace in the direction corresponding to the class with the smallest margin. This point is then transformed back to the original feature space and the process is repeated until the distance changes less than a given tolerance in comparison to the previous iteration. The exact process to calculate a DeepFool constrained margin is described in Algorithm 1. Note that we also clip x̂ according to the minimum and maximum feature values of the dataset after each step, which ensures that the point stays within the bound constraints expressed in Equation (1). While this is likely superfluous when generating normal adversarial samples - they are generally very close to the original x - it is a consideration when the search space is constrained, with clipped margins performing better. (See Section A.3 in the appendix for an ablation analysis of clipping.)

Algorithm 1: DeepFool constrained margin
Input: correctly classified sample x with true class i; principal component matrix P; learning rate γ; tolerance δ; maximum number of iterations
1: x̂ ← x
2: repeat
3:   for j ≠ i do
4:     d_j ← Taylor approximation of the constrained margin between classes i and j at x̂ (Equation (5))
5:   end for
6:   l ← arg min_{j ≠ i} |d_j|
7:   r ← perturbation towards the boundary with class l, taken within the subspace spanned by P and mapped back to the input space
8:   x̂ ← x̂ + γr
9:   x̂ ← clip(x̂)
10: until ||x̂ − x|| changes by less than δ between iterations, or the maximum number of iterations is reached
11: return ||x̂ − x||

Results

We investigate the extent to which constrained margins are predictive of generalization by comparing the new method with current alternatives. In Section 4.1 we describe our experimental setup. Following this, we do a careful comparison between our metric and existing techniques based on standard input and hidden margins (Section 4.2) and, finally, we compare with other complexity measures (Section 4.3).

Experimental setup

For all margin-based measures our indicator of generalization (complexity measure) is the mean margin over 5 000 randomly selected training samples, or alternatively the maximum number available for tasks with fewer than 5 000 training samples. Only correctly classified samples are considered, and the same training samples are used for all models of the same task. To compare constrained margins to input and hidden margins we rank the model test accuracies according to the resulting indicator and calculate the Kendall's rank correlation [34], as used in [35]. This allows for a more interpretable comparison than CMI. (As CMI is used throughout the PGDL challenge, we also include the resulting CMI scores in Section B of the appendix.) To compare constrained margins to published results of other complexity measures, we measure CMI between the complexity measure and generalization gap and contrast this with the reported scores of other methods.

As a baseline we calculate the standard input margins ('Input') using the first-order Taylor approximation (Equation (5) without the subspace transformation), as we find that it achieves better results than the iterative DeepFool variant and is therefore the stronger baseline; see the appendix (Section B) for a full comparison.

Hidden margins ('Hidden') are measured by considering the output (post activation function) of some hidden layer, and then calculating the margin at this representation. This raises the question of which hidden layers to consider for the final complexity measure. Jiang et al. [11] consider three equally spaced layers, Natekar and Sharma [17] consider all layers, and Chuang et al.
[6] consider either the first or last layer only. We calculate the mean hidden margin (using the Taylor approximation) for all these variations and find that, for the tasks studied here, using the first layer performs best, while the mean over all layers comes in second. We include both results here. (A full analysis is included in Section B of the appendix.) We normalize each layer's margin distribution by following [11], and divide each margin by the total feature variance at that layer.

Our constrained margin complexity measure ('Constrained') is obtained using Algorithm 1, although in practice we implement this in a batched manner. Empirically, we find that the technique is not very sensitive with regard to the selection of hyperparameters, and a single learning rate (γ = 0.25), tolerance (δ = 0.01), and maximum number of iterations (max = 100) is used across all experiments. The number of principal components for each dataset is selected by plotting the explained variance (of the training data) per principal component in decreasing order on a logarithmic scale and applying the elbow method using the Kneedle algorithm from Satopaa et al. [36]. This results in a very low-dimensional search space, ranging from 3 to 8 principal components for the seven unique datasets considered.

In order to prevent biasing our metric to the PGDL test set (Tasks 6 to 9) we did not perform any tuning or development of the complexity measure using these tasks, nor do we tune any hyperparameters per task. The choice of principal component selection algorithm was made after a careful analysis of Tasks 1 to 5 only; see additional details in the appendix (A.1). In terms of computational expense, we find that calculating the entire constrained margin distribution takes only 1 to 2 minutes per model on an Nvidia A30.

Margin complexity measures

In Table 1 we show the Kendall's rank correlation obtained when ranking models according to constrained margin, standard input margins, and hidden margins. It can be observed that standard input margins are not predictive of generalization for most tasks and, in fact, show a negative correlation for some. This unstable behaviour is supported by ongoing work surrounding adversarial robustness and generalization [23,24,25]. Furthermore, we observe a very large performance gap between constrained and standard input margins, and an increase from 0.24 to 0.66 average rank correlation is observed by constraining the margin search. This strongly supports our initial intuitions.

In the case of hidden margins, performance is more competitive; however, constrained margins still outperform hidden margins on 6 out of 8 tasks. One also observes that the selection of hidden layers can have a very large effect, and the discrepancy between the two hidden-layer selections is significant. Given that our constrained margin measurement is limited to the input space, there are several advantages: 1) no normalization is required, as all models share the same input space, and 2) the method is more robust when comparing models with varying topology, as no specific layers need to be selected.

Other complexity measures

To further assess the predictive power of constrained margins, we compare our method to the reported CMI scores of several other complexity measures. We compare against three solutions from the winning team [17], as well as the best solutions from two more recent works [6,30], where that of Schiff et al.
[30] has the highest average test set performance we are aware of. We do not compare against pretrained GANs [31]. The original naming of each method is kept. Of particular relevance are the MM and AM columns, which are hidden margins applied to Mixup and Augmented samples, as well as kV-Margin and kV-GN-Margin, which are output and hidden margins with k-Variance normalization, respectively. The results of this comparison are shown in Table 2.

One observes that constrained margins achieve highly competitive scores and, in fact, outperform all other measures on 4 out of 8 tasks. It is also important to note that the MM and AM columns show that hidden margins can be improved in some cases if they are measured using the representations of Mixup or augmented training samples. That said, these methods still underperform on average in comparison to constrained input margins, which do not rely on any form of data augmentation.

A closer look

In this section we do a further analysis of constrained margins. In Section 5.1 we investigate how the performance of constrained margins changes when lower utility subspaces are considered, whereafter we discuss limitations of the method in Section 5.2.

High to low utility

We examine how high utility directions compare to those of lower utility when calculating constrained margins. This allows us to further test our approach, as one would expect that margins measured using the lower-ranked principal components should be less predictive of a model's performance.

We calculate the mean constrained margin using select subsets of 10 contiguous principal components in descending order of explained variance. For example, we calculate the constrained margins using components 1 to 10, then 100 to 109, etc. This allows us to calculate the distance to the decision boundary using 10-dimensional subspaces of decreasing utility. We, once again, make use of 5 000 training samples. We restrict ourselves to analysing the training set of tasks (Tasks 1 to 5) and consider one task where constrained margins perform very well (Task 1) and one with poorer performance (Task 4).

Figure 1 (left) shows the resulting Kendall's rank correlation for each subset of principal components, indexed by the first component in each set (principal component index). The right-hand side shows the mean margin of all models from Task 4 at each subset.

As expected, the first principal components lead to margins that are more predictive of generalization. We see a gradual decrease in predictive power when considering later principal components. Task 1 especially suffers this phenomenon, reaching negative correlations. This supports the idea that utilizing the directions of highest utility is a necessary aspect of input margin measurements. Additionally, one observes that the mean margin also rapidly decreases after the first few sets of principal components. After the point shown here (index 1 000), we find that the mean margin increases as DeepFool struggles to find samples on the decision boundary within the bound constraints. Due to this, it is difficult to draw any conclusions from an investigation of the lower-ranked principal components. This also points to the notion that the adversarial vulnerability of modern DNNs is in part due to nearby decision boundaries in the directions of the mid-tier principal components (the range of 100 to 1 000).
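For readers who wish to reproduce this kind of per-subset analysis, the following short sketch (our illustration only) shows how the rank correlation between a complexity measure and test accuracy could be computed with scipy.stats.kendalltau; the per-model margins and accuracies below are purely synthetic stand-ins, not values from the paper.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical per-model summaries for one task: mean constrained margin computed
# with a given 10-component subset, alongside each model's test accuracy. Real
# values would come from the margin computation sketched earlier.
rng = np.random.default_rng(2)
test_accuracy = rng.uniform(0.5, 0.9, size=24)
mean_margin_per_subset = {
    start: test_accuracy * (1.0 / (1 + start)) + rng.normal(0, 0.05, size=24)
    for start in (1, 100, 200, 500)        # first component index of each subset
}

# Rank correlation between the complexity measure and test accuracy, per subset.
for start, margins in mean_margin_per_subset.items():
    tau, _ = kendalltau(margins, test_accuracy)
    print(f"components {start}-{start + 9}: Kendall tau = {tau:.2f}")
```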
Limitations

It has been demonstrated that our proposed metric performs well and aligns with our initial intuition. However, there are also certain limitations that require explanation. Empirically we observe that, for tasks where constrained margins perform well, they do so across all hyperparameter variations, with the exception of depth. This is illustrated in Figure 2 (left), which shows the mean constrained margin versus test accuracy for Task 1. We observe that sets of networks with two and six convolutional layers, respectively, each exhibit a separate relationship between margin and test accuracy. This discrepancy is not always as strongly present: for Task 6 all three depth configurations show a more similar relationship, as observed on the right of Figure 2, although the discrepancy is still present. The same trend holds for all tasks where it is observed (1, 2, 4, 6, 9). It appears that shallower networks model the input space in a distinctly different fashion from their deeper counterparts. For tasks such as 5 and 7, where constrained margins perform more poorly, there is no single hyperparameter that appears to be the culprit. We do note that the resulting scatter plots of margin versus test accuracy never show points in the lower right (large margin but low generalization) or upper left (small margin but high generalization) quadrants. It is therefore possible that a larger constrained margin is always beneficial to a model's generalization, even though it is not always fully descriptive of its performance. Finally, while our approach to selecting the number of principal components is experimentally sound, the results can be further improved if the optimal number is known; see Section A.1 in the appendix for details.

Conclusion

We have shown that constraining input margins to high utility subspaces can significantly improve their predictive power in terms of generalization. Specifically, we have used the principal components of the data as a proxy for identifying these subspaces, which can be considered a rough approximation of the underlying data manifold.

Constraining the search to a warped subspace and using Euclidean distance to measure closeness is equivalent to defining a new distance metric on the original space. We are therefore, in effect, seeking a relevant distance metric to measure the closeness of the decision boundary. Understanding the requirements for such a metric remains an open question. Unfortunately, current approximations and methods for finding points on the decision boundary are largely confined to L_p metrics. The positive results achieved with the current PCA-and-Euclidean-based approach provide strong motivation that this is a useful avenue to pursue.

Furthermore, we believe that constrained margins can be used as a tool to further probe generalization, similar to the large amount of work that has been done surrounding standard input margins and characterization of decision boundaries.

In conclusion, we propose constraining input margins to make them more predictive of generalization in DNNs. It has been demonstrated that this greatly increases the predictive power of input margins, and also outperforms hidden margins and several other contemporary methods on the PGDL tasks. This method has the benefits of requiring no per-layer normalization, no arbitrary selection of hidden layers, and no reliance on any form of surrogate test set (e.g. data augmentation or synthetic samples).
A Constrained margin ablation

This section demonstrates the effect of several hyperparameters on the performance of constrained margins. We analyse the selection of the number of principal components, the number of samples, as well as the effect of clipping.

A.1 Number of principal components

In order to better understand the interaction between the selection of the number of principal components and predictive power, we calculate the mean constrained margin using 1 to 50 principal components for all the development set tasks (Tasks 1 to 5). We once again make use of 5 000 samples. However, in this case, the first-order Taylor approximation is used to reduce the computational burden. The result of this analysis is shown in Figure 3, where the number of principal components selected by the Kneedle algorithm [36] (applied to the principal components in descending order of explained variance), and reported on per task in the main paper, is indicated with a star.

One observes that the elbow method selects the number of components in a near-optimal fashion for Tasks 1, 2, and 4. Furthermore, the optimal number is generally very low, whereafter the correlation decreases. Task 5 (which is the task for which constrained margins produce the lowest performance) behaves in a contrary manner, as the ranking correlation increases as the number of components becomes larger. We find that it only reaches a maximum rank correlation of 0.4 at 270 components (not shown here).

In Section 5.1 we compared the predictive performance of using subspaces of decreasing utility to calculate constrained margins. We now repeat this experiment, but rather increase the size of the subspace up to its maximum (the dimensionality of the input data). This allows us to further verify whether our method of selecting the number of principal components is sound. The result of this analysis is shown in Figure 4 for Tasks 1 and 4. One observes that the predictive ability of the constrained margin metric decreases as the size of the subspace is increased, until it reaches that of standard input margins, which is well aligned with what one would expect.
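As one possible way of automating this elbow selection, the sketch below uses the third-party kneed package (an implementation of the Kneedle algorithm of Satopaa et al.); this is our own illustration, the explained-variance profile is invented, and the exact preprocessing used in the paper may differ.

```python
import numpy as np
from kneed import KneeLocator   # assumes the third-party 'kneed' package is installed

# Hypothetical explained-variance profile: a few dominant components followed by
# a long, slowly decaying tail (a stand-in for the per-dataset PCA spectra).
explained_variance = np.concatenate([
    np.array([50.0, 20.0, 8.0, 3.0]),
    3.0 * 0.9 ** np.arange(60),
])
component_index = np.arange(1, len(explained_variance) + 1)

# Elbow of the decreasing, convex curve on a logarithmic scale, as one way of
# applying the Kneedle idea to select the subspace dimensionality.
knee = KneeLocator(
    component_index,
    np.log(explained_variance),
    curve="convex",
    direction="decreasing",
).knee
print(f"selected number of principal components: {knee}")
```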
A.2 Number of samples

We have used 5 000 samples to calculate the mean constrained margin for each task (and the same number for all other margin measurements). It is worth determining what effect the number of samples has on the final performance. In Figure 5 we show the Kendall's rank correlation between mean constrained margin and test accuracy for the development set using 500 to 5 000 samples (using the modified DeepFool algorithm). One observes that the rank correlation plateaus rather quickly for most tasks, and one can likely get away with using only 500 to 1 000 samples per model. However, to mitigate any effect that the stochastic selection of training samples can have on the reproducibility of the results, we have chosen to use 5 000 throughout. To this end, we show the number of principal components selected, as well as the number of samples used for each task, in Table 3; note that Tasks 6 and 7 use the maximum number of samples available.

A.3 Effect of clipping

Our modified DeepFool algorithm (Algorithm 1 in the main paper) enforces bound constraints on the sample by clipping x̂ to stay within the minimum and maximum feature values of the dataset after each step (see line 9 of the algorithm). Since the original images have pixel values between 0 and 1, the z-normalised data has a strict lower and upper bound. Allowing x̂ to deviate outside these values will produce boundaries that cannot exist in practice. Given that the original DeepFool algorithm does not include any form of bound constraints, we analyse the effect clipping has on the performance of constrained and standard input margins. Table 4 shows the Kendall's rank correlation per task with and without clipping. It is evident that clipping has little effect on standard input margins - this makes sense, given that samples on the decision boundary are generally very close to the training sample. However, in the case of constrained margins, we observe that clipping improves the results in most cases, and especially so for Task 8. This demonstrates that enforcing the bound constraints is a useful inclusion.

B Extended margin comparison

This section contains additional results relevant to Section 4.2 in the main paper. We compare using the first-order Taylor approximation to DeepFool, and also the selection of hidden layers.

B.1 Comparison of Taylor and DeepFool

For constrained and standard input margins, we have experimented with using both the first-order Taylor approximation as well as the DeepFool method to calculate the distance to the decision boundary.

It is clear that the selection of hidden layers plays a significant role in the overall performance of hidden margins, and we observe a large variation per task between the different methods. While we have used the two best-performing methods as a benchmark to compare with in the main paper ('First' and 'All'), this biases the comparison in favour of hidden margins, as there is no method at present to determine a priori which hidden layer selection will perform best for a given task.

C Derivation of constrained margins (Equation (5))

This section uses the same notation as defined in Section 3.2 of the main paper. We first describe the standard linear approximation of the margin following Huang et al. [32], before deriving the constrained margin of Equation (5) as numbered in the main paper.

Any function f can be approximated with its differential at point x using

    f̃(x + d) = f(x) + Hd    (6)

where

    H = ∇_x f(x),    (7)

that is, the Jacobian of the output with regard to the input features at point x. We aim to find the smallest ||d|| for some norm ||.|| such that f(x) ≠ f(x + d), or

    arg max_k f_k(x + d) ≠ arg max_k f_k(x) = i.    (8)

If we approximate f(.) with f̃(.), this implies:

    f_j(x) + ∇_x f_j(x) · d ≥ f_i(x) + ∇_x f_i(x) · d   for some j ≠ i,    (9)

where ∇_x f_k(x) is the gradient vector of the k-th output value of f with regard to input x. Then, as shown in [32], the smallest such ||d|| is given by:

    ||d|| = ( f_i(x) − f_j(x) ) / || ∇_x f_j(x) − ∇_x f_i(x) ||_*    (10)

where ||.|| and ||.||_* are dual norms. Specifically, if ||.|| is the L2 norm, then:

    ||d||_2 = ( f_i(x) − f_j(x) ) / || ∇_x f_j(x) − ∇_x f_i(x) ||_2    (11)

and

    d = ( f_i(x) − f_j(x) ) ( ∇_x f_j(x) − ∇_x f_i(x) ) / || ∇_x f_j(x) − ∇_x f_i(x) ||_2².    (12)

Equations (11) and (12) provide the standard linear approximation of the margin as used by various authors [11,16].

The derivation process for constrained margins is identical - it is only the calculation of the Jacobian that differs, as the gradient is calculated with regard to the transformed features (the coefficient vector α of Equation (3)) rather than the original input features: by the chain rule, the gradient of f_k with respect to α is P ∇_x f_k(x), and substituting this into Equation (11) yields Equation (5).

Figure 1: Comparison of high to low utility directions using subspaces spanned by 10 principal components; the x-axis indicates the first component in each set of principal components. Left: Kendall's rank correlation for Task 1 (blue solid line) and Task 4 (red dashed line). Right: mean constrained margin for models from Task 4.

Figure 3: Predictive performance (Kendall's rank correlation) as a function of the number of principal components for Task 1 (red circles), 2 (blue squares), 4 (green diamonds), and 5 (yellow triangles). The number of principal components reported on per task in the main paper is indicated with a star.

Figure 4: Predictive performance (Kendall's rank correlation) of the constrained margin as a function of the number of principal components for Task 1 (blue solid line) and Task 4 (red dashed line), calculated using Algorithm 1.

Figure 5: Predictive performance of constrained margins (Kendall's rank correlation) as a function of the number of samples for Task 1 (red circles), 2 (blue squares), 4 (green diamonds), and 5 (yellow triangles).

Table 1: Kendall's rank correlation between mean margin and test accuracy for constrained, standard input, and hidden margins using the first or all layer(s). Models in Task 4 are trained with batch normalization while models in Task 5 are trained without. There is no Task 3.

Table 2: Conditional Mutual Information (CMI) scores for several complexity measures on the PGDL dataset. Acronyms: DBI = Davies-Bouldin Index, LWM = Label-wise Mixup, MM = Mixup Margins, AM = Augmented Margins, kV = k-Variance, GN = Gradient Normalized, Gi = Gini coefficient, Mi = Mixup. The test set average is the average over Tasks 6 to 9. There is no Task 3. † indicates a margin-based measure.

Table 3: Number of principal components and samples used for each task to calculate constrained margins. Tasks 6 and 7 use the maximum number of samples available for the dataset.

Table 4: Kendall's rank correlation between mean margin and test accuracy for constrained and standard input margins with and without clipping.

Table 7: Kendall's rank correlation between mean hidden margin and test accuracy using different hidden layer selections.
Cooperation, collaboration and compromise: learning through difference and diversity

ABSTRACT

Multi-institutional and multi-professional research projects are valued for the impact and learning they generate, but their successful completion is crucially dependent on the various actors recognising their differences and working through/with them as a team. This paper is a critical reflection on one such participatory action research project, which involved new migrants and asylum seekers, an NGO, university researchers, and independent trainers in offering intercultural sexual health and gender relations workshops. It charts the course of this project by introducing the key players and focusing on significant differences and opportunities, and the critical learnings that this generated. The paper uses the concept of the 'paradox lens' as a way of understanding emerging dilemmas and tensions, and the subsequent compromises, co-operations and collaborations that ensued. In closing, it offers a set of principles generated from reflections on learning that occurred during the project, and which may be amended and adapted for other contexts and action research encounters that hope to engender collaborative learning.

Introduction

Partnership between organisations has been increasingly encouraged by funding agencies and research councils as a way of ensuring more responsive, sustainable and multiperspective research outcomes and impacts (see for instance Fransman et al. 2021; Newman, Bharadwaj, and Fransman 2019). Whether between universities, policy organisations and/or practitioners, such collaboration is often seen to be a matter of identifying complementary skills and networks and establishing common goals. But the challenges and potential of negotiating shared objectives across boundaries, whether disciplinary or organisational, are not always straightforward, and are thus themselves the object of study (Trussell et al. 2017; Bjelland and Vestby 2017). This paper is a reflection on the processes involved in one such collaborative research project. As Ashkenas (2015, paragraph 1) notes in relation to interdepartmental collaboration, 'it takes more than people being willing to get together, share information and cooperate. It more importantly involves making tough decisions and trade-offs across areas with different priorities and bosses.' In response, Vangen (2017) proposes adopting a 'paradox lens' on collaboration as a way of addressing areas of tension within management, governance and leadership in multi-organisational collaboration. Suggesting that 'collaborations that have the potential to achieve collaborative advantage are inherently paradoxical in nature', he argues that this is because 'gaining advantage requires the simultaneous protection and integration of partners' uniquely different resources, experiences, and expertise in complex, dynamic organizing contexts' (Vangen 2017, 262). He emphasises the importance of using the paradox construct to enhance reflection in practice. As educational action researchers, we are interested in the analytical value of reflecting on the learning processes and paradoxes that we experienced as we went through the conceptualisation, planning and implementation of a collaborative research project. The literature on multi-organisational collaboration has focused on how to navigate tensions (van Hille et al. 2019), empower communities or build capacity (Rasool 2017).
Though these perspectives offer opportunities for reflection and learning, their analysis tends to remain implicit in such accounts. By contrast, our starting point is to investigatethrough micro-level analysis of specific events and practices -how and what kind of learning takes place through the 'paradoxes' of collaboration. This paper sets out to answer the question: how can we engender collaborative learning in contexts characterised by multiple actors and agendas? To do this, it draws on the experiences and reflections from a participatory action research project that aimed to enhance intercultural learning on sexual health and gender relations among migrant communities. The project drew together a diverse set of actors, organisations and professionals with different kinds of expertise, expectations and intercultural experiences. Working together for over a year and reflecting on the different kinds of learning we were engaged in raised critical questions about the processes and paradoxes of collaboration. The contribution of this paper lies in its analysis of the learning encounters and interactions between actors/ institutions rather than just within them. Such a shift emphasises a more complex set of relationships and identities. The paper therefore is not concerned with the immediate intended 'action' of this participatory action research project (and its contribution to intercultural learning on sexual health.) Instead, it looks at a set of critical events and the unexpected insights into collaborative learning that they generated. These have been further developed into a set of principles that engender collaborative learning in contexts characterised by difference and diversity. We hope these could be amended and adapted for other contexts and collaborative projects. Background to the project The university researchers had a long-standing relationship with a non-governmental organisation (NGO) set up to support and empower asylum seekers and refugees in the local area. The NGO involved university students as volunteer mentors, NGO staff ran workshops for volunteers at the university on refugee rights and awareness, and MA course cohorts regularly visited the English language classes at the NGO centre. The NGO coordinator also sat on a committee associated with the University, as part of a more formal institutional relationship, and often liaised with the University admissions department on behalf of refugees who wanted to apply for university scholarships. The research project was set up to enhance intercultural understanding around sexual behaviour and gender relations among migrant, refugee, and asylum-seeking populations. A few years ago, the NGO noted that asylum seekers wanted to learn about cultural assumptions and legal frameworks around sexual abuse and gender relations in the UK. Since then, the NGO has been offering workshops in conjunction with a sexual health charity to address these issues. Later, they approached the university to help strengthen intercultural learning between participants, researchers, NGO staff, and workshop facilitators. They were aware that the sexual health charity had built their training approaches and workshops to respond to the values and practices of 'settled' UK communities and suggested that this was the time to reflect more critically on the appropriateness of this model for refugee and asylum seeker communities from diverse cultures. 
Having acted as resource persons on a participatory research training day at the university, the NGO staff proposed the idea of initiating a collaborative project with the university using this methodology. The university researchers, for their part, had experience of developing workshops in the Global South using participatory approaches, which could be adapted for participants who had arrived recently in the UK from countries in South Asia, Africa and the Middle East. This project was funded by the university through a scheme designed to accelerate the impact of research, through engaging with local partners. Project design and research cycle In terms of the project coming together, the NGO and university were the early initiators and 'official' partners in the project. Thereafter, two members of staff from the sexual health charity and a trainer from the local council with expertise on domestic violence were invited into the project by the NGO as facilitators of the workshops. Though these facilitators preferred to formally position themselves as working with the NGO's training project, rather than a formal partnership with the university, they played a central role in shaping the research and were active and full participants in all research meetings and activities. During the project, participants' and facilitators' knowledge, views, and experiences of sexual health workshops were explored through a participatory framework, to identify the tools and approaches that would support the particular needs of refugees and asylum seekers. Participatory Action Research (PAR), with its emphasis on reflection and learning for and through action (Whyte, Greenwood, and Lazes 1991), was central to the project design and ethos. PAR, by definition, is collaborative and change-oriented (Manzo and Brightbill 2007) and, at its simplest, involves researchers and participants working together to explore a particular situation or action to change it for the better (Kindon, Pain, and Kesby 2007, 1). The intention of implementing PAR was to eschew a researcherdriven approach more akin to Lewin's early vision of action research (DePalma and Teague 2008), and, instead, require active participation, negotiation and collaboration among practitioners and university researchers in all stages of the research cycle. Taking a PAR approach to the complex and culturally-grounded field of sexual health education offered a means to disrupt more formalised and hierarchical relationships, bringing together facilitators and participants to develop workshop content, for example, and for team members to 'work across axes of difference' (Kesby and Gwanzura-Ottemoller 2007, 71). Such an approach also had the potential to address assumptions based on European constructs of sexuality and sexual health concerns, by sharing knowledge whilst navigating differences in cultural norms. A first step in the project was for the project team to identify and share existing training resources, and to consider how these might be adapted to suit the needs of workshop participants and facilitators ( Figure 1). Then, NGO staff provided training for sexual health charity staff and the domestic violence trainer to inform their approach to working with refugees and asylum seekers. Following this preparation, pre-workshop exploratory sessions were held with participants, designed to elicit their direct input into needs assessment, curriculum and workshop development. 
From July 2018 separate workshops for men and women were held at the NGO centre, facilitated by the external trainers and NGO staff. The workshops followed a similar structure to previous years, but with a curriculum and approach informed by the preworkshop sessions and ongoing evaluation and learning events. An aim of this participatory evaluation process was to develop a critical lens on these workshops, based on insights from participant observation conducted by two university researchers. These researchers also facilitated focus group discussions and interviews with the workshop participants, facilitators and NGO staff, using participatory and visual methods to facilitate evaluation. Reflections on workshop content, facilitation and participant engagement did not come at the end of the action cycle, but were part of an iterative process, with learning emerging from earlier workshops informing later ones, and further informed by discussion during regular team meetings held at the university or in the city. Using participant observation as a tool within a wider PAR approach, we acknowledge that the meaning of 'participation' differs and blurs, with participant observation generally intended to observe change and participatory research to create change (Wright and Nelson, 1995). An important dimension of the project was to bring research and training approaches developed in the Global South as a resource for organisations working with refugees and asylum seekers in the UK and other countries in the Global North. These approaches were used to facilitate several research activities. The research offered insights into cultural similarities as well as differences, ways of mediating language and meaning, and facilitation as an intercultural encounter. Reflection on these findings led the team to address issues around facilitators' roles and relationships with participants, structure of the workshop sessions, language resources and additional support needs. After implementation and reflection on the workshops, a 'Workshop Guidance' pack was developed and launched at a national conference organised by the project in July 2019. Following this first cycle of action research, which was bounded by the project lifespan, NGO staff continued to adapt the workshops -responding to participants' views and lessons learnt from the research findings. They maintained an interest in extending the cycle and producing further formal evaluation. The challenge of hidden diversity In this section of the paper, we wish to draw attention to the diversity amongst actors within this project, and the implications of this. At the start of the project, we were rightly focused on the diversity of participants in the workshops, i.e. the newly arrived members of the community. There were male and female nationals, ranging in age from late teens to the 60s, both single or married, from countries such as Iran, Pakistan, Syria, Sudan, Somalia, Eritrea, Ethiopia, Democratic Republic of Congo, Iraq, Kurdistan and Sri Lanka. Talking about sexual relations, behaviour and health with a group characterised by such diverse demographics demanded careful planning and thought. The workshops needed to focus on diversity of opinion and practice across cultures, but equally on their differences with UK law and culture, which served as a common reference point. It is perhaps not surprising then, that we were drawn immediately to pay attention to the obvious differences of culture and nationality. 
But as Ahmed (2000) has noted, we live in times where 'the stranger' remains highly visible - either celebrated as the origin of difference or feared as the origin of danger. Both orientations involve 'stranger fetishism', that is, an assumption that strangeness resides in others; that the stranger is a taken-for-granted given, rather than a concept that is constructed and performed. It was when we were able to confront strangeness/difference as an integral element of the whole team, as something beyond the usual boundaries of nationality and culture, that the various actions of cooperating, collaborating and compromising came to make a useful impact on the team's functioning. As the project progressed, we were confronted with the extent of diversity amongst ourselves as project partners. This diversity encompassed different professional and organisational orientations - we were a group of social science researchers from the university, non-governmental charity workers, national and local service providers. Each of these organisations and professions came with a particular orientation, agenda and purpose. And even within each of our institutions, we drew on diverse skills, expertise and disciplinary bases. For instance, the university team of five were drawn from multiple national contexts (English-Nepali, Scottish-Malawian, Indian-British, Turkish and Filipino), with experience of working in different countries and with different age groups, and rooted in multiple disciplines (education, development, gender). The facilitators for the workshops had differing skills and experience of working in sexual health education, and women's health and domestic abuse: most of this expertise was gained through working with local British populations rather than migrant groups. Staff from the NGO were most knowledgeable about the needs and strengths of newly arrived asylum seekers and refugee communities. This depth and breadth of diversity between us meant we simply were not (and could not be) fully cognisant of each other's unique orientations and strengths from the start. As the project unfolded, we began to notice this diversity amongst the team and made adjustments to how we perceived each other and what this meant for the project as a whole. The next section of the paper focuses on specific learning encounters or moments - vignettes - that made us conscious of the differences between us, and how we needed to cooperate, collaborate or compromise to complete the project successfully.

Bridging the gap: negotiating differing expectations

From the outset, it was evident that the partner organisations each had different expectations from the project, particularly regarding the purpose of the research activities and the final dissemination conference. But all partners had a strong commitment to the support and empowerment of the refugee communities, and this was the thread that bound us as a team. This was set out in the research proposal: 'the direct beneficiaries are the refugee and asylum seekers who will participate in the workshops. They will gain understanding and engage in cross-cultural dialogue about sexual behaviour and gender violence to enable them to better adapt to life in the UK'. We had also discussed and proposed in our funding application that the partner organisations could benefit in terms of 'developing a training package appropriate for these groups of people, which broadens perspectives on gender and relationships'.
The dissemination strategy -particularly holding a national conference -was intended to ensure a wider group of beneficiaries across the UK (including refugee and health education organisations). However, within this broad agenda, we each had different ideas about what the project could deliver, shaped by our varying expectations of 'research' and institutional priorities. The NGO staff saw the research element as akin to an evaluation, which could also provide evidence of good practice. They were keen to collect data before and after the workshops to evaluate how the participants' understanding of sexual health and gender violence had changed through the intervention. This organisation, like others in the voluntary sector, were constantly seeking funds to keep themselves and their services viable. They saw the research as a useful resource for funding bids that would ensure their continuation. In contrast, we as university researchers set out with an agenda of facilitating reflection and change with all partners, as integral to a PAR approach. We consciously positioned ourselves as 'critical friends', to provide an outsider perspective on the workshops as a basis for reflection on what might be done differently. The researchers who were conducting participant observation in the workshops, found they needed to be explicit about their role, to dispel the notion that they were evaluators. By emphasising for instance, that the data would be analysed by the whole team, not just the university staff, as a way of seeking future improvements, the collaborative and action research aspects of the project were constantly foregrounded. As university researchers, we also had instrumental and pragmatic reasons for involvement in the project, such as the need to demonstrate the 'impact' of our research, the basis on which the university had awarded the project grant. In the wider context of the UK higher education sector, the practice of regular assessment of research impact on organisations and communities outside academia to provide 'accountability for public investment' (through the Research Excellence Framework, REF 2021) shaped the university researchers' orientation. The project took shape within such institutional agendas by offering the possibility of being an 'Impact Case Study'. Our different sectoral-institutional perspectives on, and expectations for, the research project emerged particularly when we were discussing the planned outputs of the project. Our proposal had included both academic and practice-orientated activities and outputs. Planned academic outputs included a co-authored research article written by the wider team to disseminate findings and a paper presented at an international education conference, 'to deepen the impact . . . within the UK and internationally' (from the proposal). This very paper itself is something that has greater value to the university researchers than the wider team, being framed by academic discourses (such as the 'paradox lens'). This language is different from the ways in which we talked informally about emerging tensions or different expectations. Although we all critically reflected on our experiences of collaboration in our team meetings, writing about these issues afterwards, for an academic journal, was simply not a priority for the NGO colleagues, despite the authors' original invitation to other members to collaborate in such efforts. 
They preferred to devote their limited time and resources -particularly stretched during the Covid-19 pandemic -on practical ways of following up on the project outcomes. On the practice side, NGO colleagues actively contributed to writing, feedback and adaptations for the Workshop Guidance pack, including providing insights into principles of engagement -co-created 'recommendations' developed for practitioners. These were all project outputs to be shared with other NGOs for informal feedback and presentation at the national conference. The NGO partners planned to use the action research findings to revise the workshop guidance for future sessions. As the two university researchers conducted participant observation during the workshops, informal discussions with the facilitators were combined with more formalised presentations of the findings framed around 'critical questions'. They discussed how the workshops might be adapted and how to revise the training package. In our team meetings at the university, the focus was more on the formal proposed output and NGO staff were keen to produce a training manual, which could be launched at the national conference. As university researchers we were cautious about preparing a 'manual' in case the workshop activities would be simply replicated or transferred to other contexts. Alternative terminology such as a 'draft manual' that participants at the national conference could contribute to and adapt, were also discussed. Finally, we decided to develop and publish a 'Workshop Guidance' in time for the conference. This was a compromise: a finished product that could be launched, but could also be framed as 'guidance' with suggestions for other organisations to adapt. It included exemplars in the form of 'activity banks' rather than a formal 'how-to' curriculum and included blank pages at the back for adding further activities and ideas. Producing such workshop guidance involved much learning on the part of the university team: in terms of writing in a more accessible and less academic style, whilst also including the critical questions and issues that had been the source of the whole team's learning during the project implementation. It was a challenge to combine these two perspectives (critical research reflections and practical 'how to' advice), a potential source of tension when we came to prepare the national conference programme. After the conference, the university researchers began to prioritise the proposed academic outputs. A team consisting of two university researchers and an NGO facilitator presented a paper at an international conference that reflected on the process of collaborating across institutions. The wider project team also presented findings at a university seminar and at a workshop at the County Council. The university researchers wrote a final report on these outcomes to the funding body and considered the project to be at an end, having produced the promised outputs. However, for the NGO, the work was not bounded by the deadlines and resources of the initial grant. They were keen for us to continue the research collaboration and pointed out that the new approach to the workshops had only just been implemented. The university researchers were invited to come and observe the process again and collect more feedback from the facilitators and participants. However, there were real time and resource constraints on such involvement once the university funding was over. 
In the end, two volunteer researchers were found (one was a university researcher who agreed to continue on a voluntary basis) to support this last phase, which has been more in line with the NGO's objective of an evaluation study. As a university team, we continued to be involved with the NGO's work informally. Attempting compromise -participant-led content versus expert-led knowledge Early tensions around different actors' understanding and uptake of the participatory nature of the research emerged during activities to support the development of a curriculum for the women's sexual health workshops. During a pre-workshop session, university researchers planned to employ creative, participatory methods to provide an opportunity for the women themselves to identify their needs, and shape future workshop content. This reflected the understanding underpinning the project -that participants' needs and preferences would be placed first when establishing workshop goals and objectives. Researchers drew up a protocol that included a focus group discussion with women from the NGO's English classes, who had been invited to join the workshops. The purpose of the focus group was to elicit the women's perspectives on their lives in the UK, relationships and sexual health challenges and access to services, and to learn what additional information needs they had. This group discussion was to be followed by a participatory pair-wise ranking exercise (Narayanasamy 2009), which involved participants sorting and ranking these identified needs into their relative importance, using hand-written cards or symbols. The exercise was to act as a prompt to discuss the reasons behind participant's choices regarding the various needs' importance. Adaptions of this visual 'draw-and-write' activity had been used previously by one of the researchers in curriculum development activities for non-formal education programmes in Malawi, including sexual health and HIV education, and had worked well with diverse groups with differing levels of literacy. The facilitators were more familiar with another activity, 'Diamond Nine', a card-sorting activity used in UK education settings (cf. Clark 2012). Diamond 9 differs from pair-wise ranking in that the cards used are already populated with pre-selected topics. So, although the Diamond 9 exercise allows participants to consider the relative importance of topics, the use of pre-written cards restricts their ability to choose their own topics. A suggestion that some blank cards be included, to allow women to write (or draw) their own choice of topics, was not taken up. On the day of the women's workshop, both activities took place, although this 'spirit of compromise' risked a longer, and potentially tiring, session for the participants. The pairwise ranking activity took place last (facilitated by the university researcher), following an introduction by NGO staff that included reference to a range of sexual health topics, and the Diamond 9 activity. This initial introduction to specific topics may well have preempted women's perspectives and influenced their choices. Not surprisingly, many of these topics were later suggested by the women in the pair-wise ranking activity. 
While both activities ranked topics on where to find help/services as most important, the Diamond 9 activity saw issues of sexual rights, legal issues and consent rank highest, whilst during the pair-wise ranking activity, women also ranked emotional issues and relationships highly, perhaps reflecting their own concerns more closely. Using the pair-wise ranking activities proved additionally helpful as drawing was an effective way to overcome language barriers, whereas the words used on the cards for Diamond 9 needed to be explained in advance. When introducing the list of possible workshop topics at the start of the session, it quickly became clear that much of the terminology relating to sexual health and relationships was unfamiliar to the participants. This initial constraint was mitigated somewhat when one participant became a de facto translator for others. Other terms remained unfamiliar, overly formal and outside the 'day-to-day' of women's knowledge. This requirement of women to decode and adopt the terminology of the sexual health experts illustrates the limits of a top-down approach to needs identification. Reflecting on this, the researcher observing the session suggested that facilitators consider including an introductory activity to unpack these terms and provide a visual 'wall' of definitions within the workshop space. This suggestion was indeed adopted during later workshops, and it proved popular with participants. Several additional topics were suggested by the women at the end of the activities, during a less structured, final 'wrap-up' session. Through this we learnt that such informal spaces for discussion were important in supporting knowledge sharing and allowing women's voices to be heard, and we were challenged to consider whether such less structured activities were actually just as effective a way of finding out their needs. Whatever the means, opportunities to express their views were valued by participants. One woman stated: 'When you told us that we are deciding the topics, I felt happy someone is hearing and caring for us.'

This vignette, from early in the process, illustrates a paradox: how university researchers' desired use of participatory research techniques to drive a bottom-up approach to needs identification was at odds with facilitators' planned use of previously crafted sessions designed to ensure that key aspects of sexual health education were not missed out. By bringing in the women as active participants in the process of workshop planning, the facilitators gained insight into the relative importance of various topics in the context of the women's lives. By combining these with more non-negotiable content (for instance, in sharing specific UK laws and regulations on consent, rape, domestic and gender violence), the workshops ultimately bridged the gaps between intentions for the workshops as understood by different team members.

Spaces for collaboration in workshop facilitation

During the workshop sessions, the facilitators were confronted with the differences between their approaches and the complexity of delivering sessions for a highly diverse and multicultural group. The women's workshops were run by two facilitators (from the sexual health charity and the local council), who were both attending and supporting each other's workshops. In the men's group, the lead facilitator was a trainer from the sexual health charity who had years of experience conducting sexual health workshops with British youth.
The co-facilitator was a member of the NGO staff who had been working with the men's group participants in other capacities (advising on asylum applications, organising football games) for about four years. Between them, there were noticeable differences in terms of facilitation style, knowledge of the topic and relationships with the participants, which they were able to bring together in a complementary way. The lead facilitator focused on delivering from a pre-set curriculum drawing on his expertise on UK laws on consent and sexual offense, and the science of sexually transmitted disease spread. Participants often considered the lead facilitator as an 'expert' who could accurately answer queries. The co-facilitator drew on his strong relationships with the men's group participants developed over the years as their mentor and confidante. For instance, he knew which participants were comfortable sitting beside each other. He could skilfully capture and re-phrase some participants' speech when they attempted to speak in English; they shared in-jokes and a similar sense of humour. The various strategies for the workshops were born out of the partnership between the two facilitators. As a duo, they had developed a certain dynamic, and created a friendly, open environment where honest and difficult conversations around sex, consent and gender relations occurred. However, they also expressed, in subsequent interviews, that the workshops would feel different every single time, particularly because the format, participants and topics would change every year. This fluidity of the sessions seemed to have given them an opportunity to learn from each other. During workshop breaks, they would speak to each other and informally evaluate the sessions that came before. Moving away from the tradition of seeing university researchers as evaluators, these two facilitators would sometimes ask the opinion of the researcher whose task was to observe and document the session. In these fleeting moments of collaborative dialogue, they were quickly appraising, redesigning and (re)strategizing workshop content and approaches in real time. Another important aspect of such collaborative and informal learning was their ability to change the workshop format and activities in response to participants' needs and interests. These attempts went beyond the project duration. For instance, one of the issues that emerged from the project was the limited interaction between male and female participants when there were sexual health concerns that were relevant to both. A year after the project, the facilitators (for the men and women's groups) collectively decided to schedule the workshops on the same day -the males in the morning and the females in the afternoon so the two groups could interact over some shared lunch. The two groups did not interact as envisaged, and the facilitators accepted the practices and desires of the community members not to engage in conversations around sexual health in a mixed setting. In a way, the collaborative and developmental ethos of the project may have reframed these workshop days less as structured and formal, and more as fluid and responsive to the needs of the group members. The NGO co-facilitator in the men's group expressed how planning for and implementing the workshops over the years had also contributed significantly to his growth: . . . in the first year I did give a lot of my own opinion . . . . we did not really discuss fully how I should go about facilitating it . . . 
I probably shouldn't have done that as a facilitator. It's a kind of natural thing when you're having a discussion. But really the workshop is designed to make them think for themselves, develop their own opinions. This excerpt illustrates how, over the duration of the project, the co-facilitator articulated a different understanding of his role and an explicit recognition of the importance of collaborative dialogue in designing effective workshops. The lead facilitator described these workshops as 'nothing like I have done in the past'. He had been compelled to adjust and relearn his facilitation process (built by working with British youth) for a multicultural group drawn from countries and cultures that he had not encountered before. In one session, on the topic of marriage, one participant began sharing information about the dowry system in their country. The lead facilitator was visibly surprised and taken aback by the information. He later shared that it was in moments like these that he continued to learn from the participants. These observations demonstrate how, in the span of the action research cycle, the workshops became much less facilitator-determined. While there were parts that were more akin to a lecture (when introducing UK laws), much of the workshop worked as a targeted conversation about particular topics reflecting participants expressed needs. This also led to participants sharing different aspects of their culture. Such exchanges generated new insights and expanded previously held ones on how sexual health is practiced and talked about in various contexts. These examples also show that the project -through its emphasis on collaborative learning -not only raised awareness but also facilitated intercultural learning that led to concrete changes. Co-operation and collaboration on the national symposium The power of the collaborative relationships between the project partners became more apparent when each of the actors were able to contribute in a way that allowed their strengths to be exploited. The myriad decisions and actions that needed to be taken towards organising the symposium offers us one such significant moment. The symposium as a whole was meant to increase the impact of the project and its contribution to a wider population. It stayed loyal to the participatory nature of the project, including workshop participants amongst the delegates and planning a range of interactive sessions. After much debate and attempts to secure a venue away from our home turf, the NGO was able to secure the ideal venue for the day. Not only were the costs relatively inexpensive (for central London), the location with its main hall, break out rooms, and garden was ideal for delegates travelling from afar and for the activities planned for the day. Secondly, delegates and organisers were able to enjoy delicious and nutritious food supplied by a catering collective of migrant women, which tied in with the whole ethos of the project. Thirdly, being able to call on an appropriate, high-profile key-note speaker who was supportive of the symposium, and able to attract practitioners from relevant organisations working with migrants/refugees and asylum seekers allowed for better dissemination. Each of these elements was made possible through the NGO staff and their knowledge and networks in the wider community connected to supporting newcomers to the UK. 
The university researchers, for their part, were able to draw on previous experience of running symposia designed to encourage participant interaction. The day was thus divided into several sessions that allowed participants to exchange knowledge and mingle with other participants. These sessions included the use of breakout group workshops with project team members sharing a particular activity from the activity bank of the Workshop Guidance. Each breakout group also included some of the participants from the original workshops. The focus of these group sessions were to (i) share the experience of the project team, (ii) draw on the expertise of the delegates and their experiences while discussing and reflecting on the activity and (iii) look for improvements or amendments to the activities. In doing so, the project team hoped to demonstrate that these activities were not a template to be followed, but a guidance to be adapted to different contexts and populations. The format of a World Café, where participating delegates were given 5 minutes each to present a slice of their organisations' work to the conference followed by a brief question and answer session meant that the project team did not have to play the role of 'experts' delivering training to delegates. By bringing together our different strengths and expertise, the symposium allowed us moments of genuine collaboration and co-operation. Discussion The vignettes offer insights into the continuous processes of collaboration, cooperation and compromise in this PAR project. We originally anticipated challenges around 'being participatory' in relation to the micro level of the workshop planning and content, and for this reason had introduced participatory tools such as pair-wise ranking to make a space for participants to have a voice. However, as the project developed, we became increasingly aware that the question of 'whose participation counts' was equally relevant to us all as project team members. We increasingly discovered and negotiated different goals, identities, and organisation cultural values within our small team. Whilst such diversity can be seen as a resource -and indeed, recognition of our complementary skills had drawn us together as a partnership between university and NGO initially -it could also become a source of tension. As academic researchers, we are conscious of how PAR extends the ethical principle of respect, with all participants obliged to recognise that their peers and co-researchers have a right to a voice and a valuable contribution to make (Manzo and Brightbill 2007). In line with the theoretical underpinning of this paper, this also presented a paradox: whilst keen to bring the voices of other team members to the fore, including in the crafting this paper, we learnt to accept that this was not always what they wanted, or needed. Turning to our opening discussion in this paper, Vangen's 'paradox lens' emphasises the importance of working with, rather than downplaying, paradox and acknowledging contradictions or tensions: 'there is a need to embrace the existence of paradox while simultaneously accepting that in practice, some kind of resolution is required insofar as enabling agency is concerned' (Vangen, 2017, 266). 
Taking Schad et al.'s (2016, 6) definition of paradox as 'persistent contradiction between interdependent elements', Vangen argues that it is not only the similarities between member organisations' goals that influence the success of a collaboration, but also the differences: 'differences in goals also facilitate collaboration as this implies greater synergies from diversity of resources' (ibid, 265). Reflecting on, for instance, our project symposium in London vignette d, this event made particularly visible the different strengths that each partner brought to the collaboration. However, in the planning process, we had also become aware of our different objectives and ideas about what the symposium should set out to achievethe NGO seeing it as a 'training day' and the university researchers believing it to be the main research 'dissemination' activity of the project. Whilst the two objectives were not necessarily in opposition, they influenced who we invited and the format of the programme. Our discussions about the organisation of this symposium could be seen in terms of recognising and balancing any possibly conflicting agendas. Reflexivity was central to this process, and, as Vangen suggests, rather than negating paradoxical tensions, it is about 'asking questions (being reflexive) with respect to how tensions are managed' (Vangen 2017, 267). Reflexivity could be seen as a certain kind of informal learning facilitated through participatory action research projects such as ours. When we look back at the process of implementing this project, we are struck by the different kinds of learning that we engaged in. Through working with the NGO, the university researchers learned above all about the fragility of the voluntary sector in terms of insecure and short-term funding. A continuing desire to frame the action research as 'evaluation', rather than as professional development or even community empowerment, was linked to NGO staff's experience of using an evaluation study to secure future grants. Their jobs and the support provided to refugee communities was dependent on such income, and our project was taking place in that context. By contrast, the university researchers implemented the research project as just one part of their job and took for granted the time-bound nature of the funding (only for one year) and the necessity of producing academic outputs. Coming together for the project meant that we began to understand how we differed as a team of non-governmental, council and university employees, particularly how our objectives and practices were being shaped by our institutional agendas. For others, the collaborative nature of the project and its emphasis on partnerships changed the way they view academic-NGO relationships in general. For instance, during our conference presentation, the NGO facilitator shared that he once worried about being part of this project because, in his experience, research tended to be extractive -getting data from NGOs, but not collaborating with them to develop potential solutions. Through this PAR project, he had appreciated that we were all learning together and attempting to improve practice, even in a limited timescale. The overall experience of the project pointed to the importance of learning and accepting differences in perspectives and agendas, and also of building on each organisation and individual's prior experience and skills. 
As Kesby and Gwanzura-Ottemoller (2007, 78) observe, 'Engaging with resistance productively, rather than being frustrated by it, will ultimately help strengthen PAR projects'. Our common commitment to the communities with whom the NGO worked was also a key factor in deciding when compromises needed to be made. In this respect, both the university researchers and NGO staff shared a recognition of the limitations of a project like this, especially in terms of how far it could address the deeply embedded structural inequalities that affect migrant women and men. As Vangen suggests, such understanding is integral to the process of collaboration: 'the acceptance of the paradoxical nature of collaboration, with its intrinsic tensions, can ultimately lead to consideration of realistic rather than idealistic expectations of what can be achieved ' (2017, 271). Reflections and implications for practice To conclude the paper, we set out reflections on the collaborative learning that underpinned the action research process. Some of these ideas emerged from principles that were part of our thinking from the start of the project. Others emerged during the course of the project, and yet others remain aspirational. We share these here, not as specific recommendations, but as principles for multi organisational groups and collaborators working across the academic-practitioner divide to consider, amend, change and develop further, within the contexts of collaborative learning, diversity and practice. (1) From the outset, we committed, as a team, to recognising and sharing different kinds of knowledge and experience, without ranking their importance or status, and giving equal value to all. Researchers, practitioners and participants each brought different expertise into the project. Through the project we gained important insights into each other's skills and learnt how many can be harnessed in complementary and innovative ways. (2) Though coming from very different starting points, participants' learning needs were central to our conversations and critical questions when establishing and reflecting on workshop objectives. The use of participatory means to garner workshop participants' perspectives and preferences deepened understanding of these needs and provided bespoke and adaptable learning moments for both facilitators and participants. (3) We learnt that acknowledging and working through tensions requires commitment to making space to listen and to hear diverse views and positions without judging them. Team meetings, whether at the university or, less formally, in cafés in the city centre were valuable opportunities to learn about each other's expectations and perspectives on the progress of the project. Integrating frequent spaces for reflection and learning into the action research cycle enhanced moments of understanding and genuine collaboration. (4) We recognised that our interactions and learning encounters needed to be situated and understood within a more complex set of relationships and identities, reflecting different institutional cultures, concerns and values. As the project evolved, we saw a number of different roles emerge -as presenters, translators, participants, contributors, facilitators, researchers, organisers, learners. These were not all necessarily decided upon from the start, and we learnt that they are open to different members of the group at different times. 
A participant may be a translator, a contributor or learner, for example, while researchers may 'step-in' to facilitate and NGO staff take up knowledge sharing and research dissemination activities. Understanding and celebrating the diversity in roles added to the richness of the research process and outcomes. (5) Learning occurred when these different roles and expertise, individuals and institutions, came together, albeit temporarily. We acknowledge that all of us experienced learning and, to various degrees, expanded our understanding and knowledge. Identifying the ways in which that learning and co-evolving occurred was an ongoing process. Whilst as researchers, we saw this as framed by the project cycle, we learnt that, for practitioners, learning was not bounded in this way, but rather continued to evolve and inform their work beyond the life of the project. (6) We realised that learning events can be uncomfortable, conflicting or even destabilising. They may fall short in their aim to empower participants or bring surprising expectations and unintended outcomes. We recognise that these disruptive moments are part of learning. (7) In our discussions, as a team, we were also aware that our small-scale project cannot change embedded structural inequalities and we needed to acknowledge our limitations within wider policy and social environments. (8) The learning from the project was opened for sharing across communities and organisations and genuine collaboration required opportunities to engage with all participating in the project.
2021-10-19T16:00:04.743Z
2021-09-23T00:00:00.000
{ "year": 2023, "sha1": "a3bed207f317c8e3c4686d0a1f89b2ce3e843fbe", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/09650792.2021.1970604?needAccess=true", "oa_status": "HYBRID", "pdf_src": "TaylorAndFrancis", "pdf_hash": "f341a4d5b360315623b790dd5443a511b13ad729", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Sociology" ] }
260446886
pes2o/s2orc
v3-fos-license
The association between handgrip strength and depression in cancer survivors: a cross-sectional study

Background
The association between handgrip strength and depression in cancer survivors was unknown. We aimed to examine the association between handgrip strength and depression in cancer survivors by using public data (National Health and Nutrition Examination Survey). We combined two waves of the National Health and Nutrition Examination Survey (2011-2014) to explore this important issue. Handgrip strength was defined as the maximum value achieved with either hand. Depressive symptoms were assessed by the Patient Health Questionnaire (PHQ-9), with a cut-off of >=10 points indicating depressive symptoms. Other characteristics and health-related variables were evaluated. Multivariable logistic regression models were adopted to explore the associations between handgrip strength, or low handgrip strength, and depressive symptoms, adjusting for potential confounding factors.

Introduction
The number of cancer survivors is increasing thanks to multiple advances in technology and medicine and to the aging of society 1 . Meanwhile, cancer survivors often suffer from depressive symptoms, with an estimated prevalence of 20%, compared with 5% in the general population 2 . A meta-analysis reported that the prevalence of major depression and minor depression was 15% and 20%, respectively 3 . Cancer patients with depression have an increased risk of adverse outcomes, such as poor adherence to medical treatment 4 , shorter survival 5 , and even suicide 6 , drawing considerable attention from clinicians and society. Therefore, early identification of risk and corresponding management of depression among cancer patients is essential. Several factors, including serious illness, female gender, social deprivation, and other health-related factors, are associated with depression 7 . Apart from these factors, functional limitations such as low handgrip strength have become a topic of growing research interest in relation to depression 8-10 . A previous study indicated that muscle mass could influence depression through the secretion of myokines 11 , and muscle mass is a main determinant of the capacity for physical activity. Indeed, regular physical activity is associated with a decreased risk of depression 12 . Handgrip strength, an indicator of muscle function, can easily be measured with a dynamometer and is widely applied in different settings. In addition, handgrip strength is also used as a parameter to assess the nutritional status of patients. The association between handgrip strength and depression has been widely explored among community-dwelling older adults, showing that older adults with high handgrip strength had a lower risk of depressive symptoms 13,14 . Furthermore, in a recent meta-analysis, handgrip strength was associated with depression (pooled OR = 0.85, 95% CI: 0.80-0.89), indicating that handgrip strength is a protective factor against depression 15 . However, most of the studies included in that meta-analysis were conducted in community or nursing-home settings where participants were relatively healthy. In contrast, the association between handgrip strength and depression among cancer patients was unknown.
Therefore, the aim of the present study was to explore the association between handgrip strength and depression among cancer patients, and to examine the association between low handgrip strength, based on the definition used for sarcopenia, and depression by using the public database of the National Health and Nutrition Examination Survey.

Methods
Study design and participants. This cross-sectional study used data from the National Health and Nutrition Examination Survey (NHANES), which explores the overall picture of nutrition, health, and risk factors among residents in various states across the USA (Centers for Disease Control and Prevention; http://www.cdc.gov/nchs/nhanes.htm). The survey is nationally representative, adopting multistage, clustered sampling methods and enrolling about 5000 participants each year. Volunteering participants were asked to complete a physical examination, and the survey was approved by the Research Ethics Review Board of the National Center for Health Statistics. All participants provided written informed consent. In the present study, data including handgrip strength, depression, sleep disorder, cancer type, baseline characteristics, and other health-related variables were extracted from the NHANES 2011-2012 and 2013-2014 cycles and aggregated for the final analysis.

Cancer diagnosis
A cancer diagnosis was confirmed by the question "Have you ever been told by a doctor that you had been diagnosed with any type of cancer?". We selected the patients who answered "Yes".

Handgrip strength
The details of the muscle strength measurement are described in the NHANES Procedures Manual. Briefly, participants who were able to complete the test followed the standard procedure. Investigators explained the procedure and asked participants to squeeze the dynamometer as hard as possible three times, recording the maximum value as the participant's final handgrip strength. In the present study, low handgrip strength was defined as <27 kg for males and <16 kg for females 16 .

Depressive symptoms
Following the Questionnaire Instruments, the Patient Health Questionnaire (PHQ-9) was used to assess the depressive symptoms of participants; it consists of 9 items, each scored from 0 to 3 points 17 . The total PHQ-9 score ranges from 0 to 27. We categorized participants into depression and non-depression groups, with a cut-off of 10 points based on a previous study that reported good sensitivity (88%) and specificity (88%) for identifying major depression 18 .

Covariates definition
We extracted demographic characteristics including age, gender, education, race, marital status, and smoking. Race was defined as Non-Hispanic Black, Non-Hispanic White, and others; marital status was classified as married, widowed or divorced, and others. Education was grouped into four categories: less than 12th grade, high school, some college, and college graduate or above. Other covariates such as BMI, cancer diagnosis, sleep disorder, history of stroke, and history of congestive heart failure (CHF) were also extracted. In addition, leisure-time physical activity was assessed by the Global Physical Activity Questionnaire 19 . The detailed calculation method has been reported previously. In brief, we summed the weekly minutes of vigorous- and moderate-intensity physical activity and classified participants as inactive or physically active, with a cut-off of zero min/week.
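To make these operational definitions concrete, a minimal sketch of how the exposure and outcome variables could be derived from NHANES-style records is given below (in Python); the column names and example values are hypothetical, and only the cut-offs themselves (maximum grip < 27 kg for males and < 16 kg for females, PHQ-9 total >= 10, zero min/week of moderate-to-vigorous activity) come from the definitions above.

```python
import pandas as pd

# Hypothetical NHANES-style records; column names and values are illustrative only.
df = pd.DataFrame({
    "gender": ["male", "female", "male"],
    "grip_max_kg": [31.0, 14.5, 25.0],            # maximum handgrip of either hand
    "phq": [[0, 1, 0, 2, 1, 0, 1, 0, 0],          # nine PHQ-9 item scores (0-3 each)
            [3, 2, 2, 1, 2, 1, 1, 0, 1],
            [1, 0, 1, 0, 0, 0, 0, 0, 0]],
    "mvpa_min_week": [0, 150, 30],                 # moderate + vigorous activity per week
})

# Low handgrip strength per the sarcopenia consensus cut-offs used in the text.
cutoff = df["gender"].map({"male": 27.0, "female": 16.0})
df["low_grip"] = df["grip_max_kg"] < cutoff

# Depressive symptoms: PHQ-9 total score of 10 or more.
df["phq_total"] = df["phq"].apply(sum)
df["depression"] = df["phq_total"] >= 10

# Leisure-time physical activity: any weekly moderate/vigorous minutes vs. none.
df["active"] = df["mvpa_min_week"] > 0

print(df[["low_grip", "phq_total", "depression", "active"]])
```

In the actual analysis these derived indicators would then enter the regression models described in the next section.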
Statistical analysis
Continuous variables, including age, BMI, and depression score, are presented as means and SDs, and categorical variables, including diagnosis, gender, race, education, marital status, sleep disorder, and other health-related variables, are displayed as frequencies (%). Comparisons of low versus normal handgrip strength and of depression versus non-depression were conducted with Student's t test, the chi-squared test or Fisher's exact test, and the Mann-Whitney test, as appropriate. In addition, generalized additive model (GAM) analysis was used to detect whether there is a non-linear relationship between handgrip strength and depression 20 . Before performing multivariable logistic regression analysis, we used Least Absolute Shrinkage and Selection Operator (LASSO) regression to select variables for the final regression model. The results indicated that these variables (age, race, education, marital status, sleep disorder, history of stroke, polypharmacy, BMI, handgrip strength) were selected. Finally, multivariable logistic regression analysis was adopted to identify the independent association between handgrip strength and depression after controlling for potential confounding factors including age, race, education, marital status, sleep disorder, history of stroke, polypharmacy, and BMI. We also categorized handgrip strength into low and normal handgrip strength according to the revised European consensus on the definition and diagnosis of sarcopenia, with cut-off values of <16 kg for females and <27 kg for males. The association between low handgrip strength and depression was also examined by multivariable logistic regression analysis with adjustment for the same variables. Unadjusted subgroup analyses of the association between low handgrip strength and depression were performed for different variables (inactive versus active, age >=65 versus <65, sleep disorder, congestive heart failure, polypharmacy, history of stroke, marital status, race, and education). All statistics were computed with the software packages R and EmpowerStats, with P < 0.05 considered significant.

Results
Overall, cancer patients with low handgrip strength were more likely to be older, to have poor sleep, and to be less likely to participate in physical activity. In addition, the proportions of depression and polypharmacy were higher in cancer patients with low handgrip strength than in those with normal handgrip strength. Patients with a history of stroke or congestive heart failure tended to have low handgrip strength (Table 1).

Univariate analysis for the factors related to depression
The results of the univariate analysis showed that female gender, low handgrip strength, sleep disorder, stroke, and polypharmacy were associated with depression. In addition, cancer patients who were not married were more likely to be at risk of depression. Other variables related to depression are displayed in Table 2.

Non-linear relationship analyses
Generalized additive model (GAM) analysis was adopted to detect whether there is a non-linear relationship between handgrip strength and depression; the results suggested that the association between handgrip strength and the risk of depression was negative, meaning that as handgrip strength increased, the probability of depression decreased (Fig. 1).

Subgroup analysis between low handgrip strength and depression in terms of different variables
The subgroup analysis showed that the association between low handgrip strength and depression among cancer patients was almost unchanged across the various strata, indicating that the association was reliable and stable (Fig. 2).

Discussion
The present study showed that cancer patients with low handgrip strength had an increased risk of depression compared with those with normal handgrip strength, implying that cancer patients may need measures such as resistance training and nutritional programs to improve handgrip strength and ultimately reduce the risk of depression. To the best of our knowledge, this is the first study to explore the association between handgrip strength and depression among cancer survivors. Many studies have examined the association between handgrip strength and depression in community-dwelling populations. In a cross-sectional study of 24,109 Chinese adults (41.5 ± 11.9 years), the authors found that participants with higher handgrip strength had a lower risk of depression, and this association was particularly pronounced in females 21 . Furthermore, another prospective cohort study conducted in rural Chinese populations reported an inverse association between handgrip strength and depression 22 . Our study is in line with these previous studies. However, most of these studies were performed among relatively healthy people. Only a few studies have focused on hospitalized patients. A study in 2020 reported this association among participants with different chronic diseases, with inconsistent results 23 . It found that participants in the highest strength tertile had less depression among people with no disease or with metabolic diseases; however, this association was not observed in patients with arthritis, which needs further study. Our study focused on a special population, cancer survivors, among whom depression is prevalent 24 . Additionally, co-morbid depression has an adverse impact on treatment and recovery in cancer patients. Therefore, early prevention and treatment of depression are essential for cancer survivors. Prior studies mainly focused on other risk factors for depression, including social factors (family, social support, stressful life events), characteristics of the cancer (type of cancer, recurrence, prognosis), cancer treatment (radiotherapy, chemotherapy, treatment burden), individual characteristics (age, gender, marital status), and the psychological response to diagnosis. To the best of our knowledge, no study has explored handgrip strength, a modifiable parameter, in relation to depression. Handgrip strength has many merits compared with the abovementioned factors: it is simple, convenient, and not time-consuming, and it is widely used in clinical settings and in primary community healthcare 25 . Therefore, cancer patients with low handgrip strength, who can hardly participate in physical activity, would not gain the beneficial antidepressant effect produced by exercise. Third, some studies have reported that muscle-brain crosstalk can be mediated by myokines and metabolites, which are secreted by muscle and play a role in regulating hippocampal function, which in turn is closely related to depression 11,30 . Although there are some possible explanations for the association between low handgrip strength and depression, future studies are warranted to explore the underlying mechanisms. This study has strengths and drawbacks.
First, to our knowledge, this is the first study to examine the association between handgrip strength and depression among cancer survivors, which is a fundamental issue for the prevention and management of depression among oncology patients. Second, our study suggests that, by using a simple and convenient dynamometer to measure handgrip strength, clinicians can identify a group at high risk of depression. Given that handgrip strength is modifiable, appropriate and personalized exercise and nutritional programs may be beneficial for reducing the risk of depression. Third, our study used comprehensive statistical analyses such as LASSO regression, which uses shrinkage to better select variables and minimize multicollinearity. However, some drawbacks need to be mentioned. First, the cross-sectional design limits the ability to identify a causal association, which requires prospective cohort studies for further exploration. Second, the NHANES database did not provide some other important information, such as cancer treatment (radiotherapy and chemotherapy), time since the first cancer diagnosis, and advanced disease stage, which might lead to over- or underestimation of the association between low handgrip strength and depression. Our study indicated that cancer survivors with low handgrip strength had an approximately 2.1-fold risk of depression, implying that clinicians should consider corresponding interventions (physical activity and nutrition) to improve handgrip strength, with the benefit of reducing depression.
2021-10-18T18:26:17.760Z
2021-09-28T00:00:00.000
{ "year": 2021, "sha1": "81432c9b33490c1c984dd848f1badb250894ba88", "oa_license": "CCBY", "oa_url": "https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-022-02795-0", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "456c4ab113b2b21d86bfd48bd77ba19877762fff", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233566462
pes2o/s2orc
v3-fos-license
The impact of seasonal sulfate–methane transition zones on methane cycling in a sulfate‐enriched freshwater environment Lake Willersinnweiher located in south‐western Germany is a small eutrophic gravel pit lake fed by sulfate‐enriched groundwater. The aim of this study was to investigate the total methane (CH4) mass balance of Lake Willersinnweiher with a particular focus on the interaction of carbon and sulfur cycling within the lake sediments and the redoxcline of the water column. Our results show that Lake Willersinnweiher permanently releases CH4 to the atmosphere throughout the whole year 2018 at rates ranging from 5 to 120 mol d−1. Sediment data show the presence of intense anaerobic oxidation of CH4 in the upper sediment layers during early summer. Here, CH4 is most likely consumed via sulfate in sulfate–methane transition zones (SMTZs) that have been observed for a few specific freshwater environments only. Seasonal dynamics in biogeochemical processes trigger the non‐steady state conditions within the sediments and the CH4 consumption in the SMTZs. In parallel, CH4 released from the sediments is completely consumed by aerobic oxidation processes in the redoxcline indicated by minimum CH4 concentrations with high δ13C–CH4 values. This zone acts as an effective barrier, minimizing CH4 release into the surface water and the atmosphere and thus CH4 oversaturation along with near‐atmospheric isotopic composition indicate the presence of an additional CH4 source in the epilimnion of Lake Willersinnweiher. The emission of the greenhouse gas methane (CH 4 ) from freshwater lakes has been suggested to play a substantial role in the global methane budget (e.g., Bastviken et al. 2004). Here, significant amounts of CH 4 are emitted, even though CH 4 produced in aquatic systems is largely consumed by anaerobic and aerobic methanotrophs (up to 30-99%; Bastviken et al. 2008). The amount of emitted CH 4 is thereby depending on bioproduction, degradation and mineralization of organic substances in the sediments and the water column of the freshwater lake. Carbon mineralization in anoxic lake sediments is affected by manganese (Mn) and iron (Fe) reducing bacteria metabolizing competitive substrates, outcompeting CH 4 forming microorganisms (methanogens) within the upper sediment layers (e.g., Whiticar 1999). Sulfate (SO 4 2− ) reduction often has minor implications on organic matter degradation due to low SO 4 2− availability in most freshwater environments (Holmer and Storkholm 2001), so that SO 4 2− is rapidly depleted with sediment depth and SO 4 2− reducing bacteria become inactive or are absent. As a consequence, methanogenesis is the most important process in overall carbon mineralization in anoxic lacustrine sediments (e.g., Rudd and Hamilton 1978). Competitive metabolisms are not only affecting methanogenesis, but also consumption of CH 4 by bacteria (methanotrophs) in the sediments. The anaerobic oxidation of methane in the sediments of lakes is usually coupled to the reduction of the energetically more favorable electron acceptors, such as nitrate and nitrite (e.g., Raghoebarsing et al. 2006) or Fe(III) and/or Mn(IV) (e.g., Beal et al. 2009). Methanotrophy coupled to SO 4 2− reduction is a common feature in marine sediments due to high pore-water SO 4 2− concentrations, but was, so far, only observed for a few specific freshwater environments (e.g., Borrel et al. 2011). 
Those environments are, e.g., groundwater-fed lakes, which may tend to be enriched by SO 4 2− originating from the weathering of sulfur-containing rocks in the catchment. This leads to significant SO 4 2− reduction, especially in eutrophic lakes, where both the availability of organic matter and SO 4 2− concentrations are high. In the sediments, CH 4 is consumed to a great extent by the anaerobic oxidation of methane in sulfate-methane transition zones (SMTZ; Hinrichs et al. 1999;Boetius et al. 2000). The various pathways of CH 4 formation and degradation can be characterized and distinguished by the determination of the stable carbon isotopic composition of CH 4 (δ 13 C-CH 4 values). The isotopic signature of CH 4 sources is thereby dependent on isotope fractionation during the methanogenic and methanotrophic reactions as well as the δ 13 C values of the used substrates. Fractionation factors for the biogenic methanogenesis vary considerably under different conditions and environments, and differ between the methanogenic pathways (e.g., Conrad 2005). In general, biogenic CH 4 has relatively low δ 13 C-CH 4 values (− 40 to − 90‰), due to enrichment of light 12 C by microbial isotope fractionation (e.g., Rosenfeld and Silverman 1959). In lakes, biogeochemical processes in shallow sediments are usually controlled by rather fast variations in temperature and redoxcline depth, leading to considerable changes in porewater chemistry and turnover rates (e.g., Crill and Martens 1987). Thus, reduced products of turnover processes of organic substances are released into pore waters and diffuse into the water column. Here, oxic and/or anoxic re-oxidation processes are predominant in the sediments or in the water column, depending on the seasonal stratification of the lake. The purpose of this study was twofold: (1) to investigate the total methane mass balance of Lake Willersinnweiher and (2) to study the interaction of carbon, sulfur and manganese cycling within the sulphate-methane transition zones in the sediments and at the redoxcline in the water column of Lake Willersinnweiher (SW Germany) in greater detail. Lake Willersinnweiher has been shown to contain relatively high SO 4 2− concentrations (up to 2.4 mM), caused by the inflow of suboxic and SO 4 2− enriched groundwater (Schröder 2004). The interplay between groundwater and surface water, as well as high bioproduction in the summer season result in high sulfide (S-II) concentrations in the anoxic hypolimnion of Lake Willersinnweiher. Seasonal and spatial variations of the redoxcline depth and fluxes of the redox sensitive elements manganese and sulfur were observed in the sediments as well as the water column and correlate with the lake depth (Schröder 2004). Schröder (2004) further suggested that CH 4 cycling plays an essential role within the sediments of Lake Willersinnweiher, since the flux balances indicate an imbalance between the cycles of C and S, demonstrating that an additional C source within the sediment is required. Therefore, our study particularly focuses on the extent and seasonal variability of CH 4 production and consumption coupled to sulfur cycling in Lake Willersinnweiher. Materials and Methods Study site, hydrological, and hydrogeochemical parameters Lake Willersinnweiher is located in the plain of the Upper Rhine Graben near Ludwigshafen, Germany (49.499950 N; 8.397138 E; Fig. 1). 
It is one of four former gravel pits, which were built for the excavation of gravel and sand from the upper aquifer sediments of Pleistocene age. The fine-grained lake sediments are few centimeters to up to only 50 cm thick and act as a boundary layer to the sandy aquifer material. The lake has a size of 17 ha, is composed of two smaller basins and has a mean depth of about 8 m with a maximum depth of 20 m (Sandler 2000). Lake Willersinnweiher has been routinely monitored for its major pore-water and water-column data as well as groundwater/lake-water interactions for ca. 20 years (e.g., Sandler 2000;Schröder 2004;Wollschläger et al. 2007). The lake is classified as a eutrophic hardwater lake and fed by precipitation and groundwater. The average water residence time was estimated to be 3.7 a (Wollschläger et al. 2007). In recent years, Lake Willersinnweiher is monomictic with a circulation period in winter from November/December to March/April. During this period, the water column is fully oxic and O 2 reaches the upper few millimeters of the sediments (Schröder 2004). In the upper sediment layer, O 2 is consumed by reduced minerals as, e.g., FeS and pyrite, which are present in the sediments. These minerals are partly oxidized during the cold season. It was observed that the SO 4 2− reduction zone shifts downward in winter, and S(-II) is fully consumed in the oxic sediment layer, whereas it moves upwards during the stratification period, when the reduced products are released into the water column (Schröder 2004 Groundwater infiltrating the lake at the south-eastern shore has further passed at least one of the lakes located upstream of Lake Willersinnweiher (Wollschläger et al. 2007). Coupling between groundwater and surface water, as well as high bioproduction in the summer season result in high total S(-II) concentrations in the anoxic hypolimnion of Lake Willersinnweiher (Schröder 2004). Further investigations suggested seasonal and spatial variations of the redoxcline depth and fluxes of the redox sensitive elements Mn as well as S-compounds in the sediments as well as the water column, correlating with the lake depth. Sampling of the water column was carried out at the centre of the lake (profundal site) and sediment samples were taken from three sites (littoral, slope and profundal) with different lake depths ( Fig. 1; Supporting Information Table S1). The sediments, lake water, and groundwater inflow and outflow were sampled in May 2017, in January, May, August, and October 2018, as well as in January 2019. Sampling and analytical methods Pore-water analyses were performed on two parallel cores (core length~20-40 cm) of lake sediments recovered from three locations using a manual gravity corer. From one core, pore-water was extracted by rhizons (Rhizosphere Research Products, The Netherlands) with a pore size of 0.15 μm according to the procedure described by Seeberg-Elverfeldt et al. (2005) immediately after retrieving the cores from the lake floor. Rhizons were inserted through predrilled holes (sealed with tape while sediment sampling) in the liner used for sediment sampling and connected to the syringes using the leakage-free Luer-Lock adapters. Syringes were then drawn up and locked in this position so that the pore water was sucked into the air-free syringe. In this study, we typically sampled 5-10 mL of porewater, which is sufficient for analysis of all major components. 
After reaching this volume, the syringe was immediately sealed gastight with a Luer-Lock adapter and stored cold until further analysis either immediately after sampling in the field in an aliquot of the sample (e.g., S(-II)) or within 24 h in the laboratories at the Institute of Earth Sciences at Heidelberg University, Germany. The second core was used to determine the concentration of CH 4 in the pore-water. Lake sediment of defined sediment depths were sampled with a cut-off plastic syringe (3 mL) and transferred into a glass vial together with 5 mL of 1 M NaOH solution. All vials were immediately sealed with butyl rubber stoppers with crimp caps and stored dry and dark until further analysis. Additionally, subsamples were taken for density and porosity determination in spring 2017 only. For this purpose, sediment samples were weighed before and after drying for 24 h at 105 C. In the water column, the field parameters water temperature ( C) and dissolved oxygen (mM) were examined in situ with a multiparameter water probe EXO1 (Xylem Analytics, Norway). Samples for the analysis of dissolved ions in the lake water were collected in a side stream of the membranecoupled cavity ring down analyzer (M-CRDS system) and filtered through a 0.45 μm cellulose acetate filter. Samples for Fe and Mn were acidified with HNO 3 (6 M). Groundwater wells were sampled using a submersible Grundfos MP1 pump according to sampling regulations (DVGW 2011). Redox sensitive parameters were constant prior to sampling, indicating access to the flowing aquifer water. Samples were collected following the procedure described for the lake water samples. All pore-water, lake water and groundwater samples were stored dry and cool (4 C) until further analysis. Lake depth profiles of CH 4 concentrations and δ 13 C-CH 4values in the water column were performed by the M-CRDS system as described in detail by Hartmann et al. (2018). Each depth was measured for 20 min and CH 4 and δ 13 C-CH 4 values were averaged over the last 10 min of the measurement interval. For quality control, the working reference gas for CH 4 (10 ppmv CH 4 in synthetic air) was analyzed prior and at the end of the measurements. The results by the M-CRDS were verified for CH 4 concentrations by analyzing subsamples taken from the side stream of the M-CRDS system and analyzed for CH 4 concentrations in the laboratory. Pore-water and water column CH 4 concentrations were measured by gas chromatography (GC) with Barrier Ionization Discharge Detector (GC-BID, Shimadzu, Japan) and Flame Ionization Detector (GC-FID, Shimadzu, Japan), respectively. For quality control, working reference gases were analyzed along with GC-BID (1000 ppmv CH 4 in synthetic air) and GC-FID measurements (2.192 and 9.872 ppmv in synthetic air). Dissolved CH 4 concentrations were determined using Henry's law and solubility coefficients for CH 4 according to Yamamoto et al. (1976). The groundwater δ 13 C-CH 4 values were analyzed with stable isotope ratio mass spectrometry (GC-C-IRMS, Deltaplus XL, Thermo Finnigan, Bremen, Germany) following the procedure described in Hartmann et al. (2018) and values were normalized using two CH 4 working standards (isometric instruments, Victoria, Canada) with values of − 23.9 AE 0.2 and − 54.5 AE 0.2 (in ‰ vs. V-PDB). Please note that pore-water δ 13 C-CH 4 values could not be determined. Total dissolved concentrations of Na, K, Ca, Fe, and Mn were determined by inductively coupled plasma atomic emission spectroscopy (Agilent ICP-OES 720, USA). 
For quality control, the reference material SPS-SW2 was analyzed along with the samples, with a measurement precision for each element of < 2%. The concentrations of SO 4 2− and nitrate (NO 3 − ) were analyzed by ion chromatography (Dionex ICS 1100, Thermo Fisher Scientific, Waltham, Massachusetts). The measurement precision for each element was < 3%, derived from long-term repeated analysis of the reference material SPS-NUTR-WW1. Total sulfide (S-II) was determined photometrically (DREL 2800, Hach, Loveland, Colorado) using the Sulfide Test Spectroquant (Merck, Germany) immediately after sampling in the field in an aliquot of the sample, once a sufficient amount of pore water had been retrieved during rhizon sampling. Dissolved inorganic carbon (DIC) was measured using the Shimadzu TOC-V CPH (Shimadzu, Japan) with a precision of < 2% derived from repeated analysis of an in-house standard water. For quality control of all geochemical analyses, ionic balances were calculated from the sums of major cation and anion concentrations. The deviations were lower than 5%. All laboratory analyses were performed at the Institute of Earth Sciences at Heidelberg University, Germany.

Calculations
Diffusive fluxes for all terminal electron acceptors (TEA) were calculated from concentration gradients assuming steady-state conditions according to Fick's first law, J = −φ D S (dC/dx) (Eq. 1), where J is the diffusive flux of each TEA (mmol m −2 d −1 ), φ is the porosity of the sediment, C is the TEA concentration (mol m −3 ), and x is the depth (m). Calculations were based on the diffusion coefficient D S (m 2 s −1 ) for pore-waters, which depends on the tortuosity θ (dimensionless) as D S = D w /θ 2 (Eq. 2). D w is the molecular diffusion coefficient in water (m 2 s −1 ), taken from Broecker and Peng (1974), and the tortuosity θ was calculated from the measured porosities as θ 2 = 1 − ln(φ 2 ) (Eq. 3) according to Boudreau (1996). Methane release from the sediments into the bottom water layer was calculated based on the concentration gradients given by the pore-water CH 4 concentration in the top 3 cm of the sediment. Sedimentary consumption rates in the sulfate-methane transition zones were estimated by calculating local transition zone-related fluxes, where both SO 4 2− and CH 4 concentrations were decreasing and S(-II) and DIC showed a local maximum. An exemplary illustration of the method used to calculate the various fluxes is provided in the Supporting Information (Fig. S1). Fractionation factors (α c ) for CH 4 oxidation in the water column were calculated after Claypool and Kaplan (1974) using the closed-system Rayleigh equation (Eq. 4), with δ 13 C o as the stable carbon isotope value of CH 4 in the bottom-near water layer, δ 13 C as the stable carbon isotope value of CH 4 in the zone of CH 4 oxidation, and f as the fraction of oxidized CH 4 . The whole-lake mass balance was calculated for the littoral, slope and profundal zones based on the diffusive fluxes estimated for the CH 4 sinks and sources addressed in this study. The CH 4 mass balance accounted for the CH 4 consumption in the sediments, diffusive sedimentary CH 4 release and CH 4 consumption at the redoxcline, groundwater CH 4 input and loss, as well as the diffusive CH 4 emissions into the atmosphere. The latter were estimated from the wind-mediated gas transfer across the surface water-atmosphere boundary layer after Wiesenburg and Guinasso (1979), with the gas transfer velocities obtained by calculations based on Cole and Caraco (1998). Wind speeds were obtained from a nearby weather station (49.51 N/8.55 E).
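The following minimal sketch illustrates the flux calculation outlined above (Eqs. 1-3 and 5) in Python; the depth grid, CH 4 concentrations, porosity, molecular diffusion coefficient and sediment surface area are placeholder values chosen purely for illustration, not measured data from Lake Willersinnweiher.

```python
import numpy as np

def diffusive_flux_mmol_m2_d(conc_mol_m3, depth_m, porosity, d_w_m2_s):
    """Fick's first law (Eq. 1): J = -phi * D_s * dC/dx.

    D_s = D_w / theta^2 with theta^2 = 1 - ln(phi^2) (Eqs. 2-3, Boudreau 1996).
    Depth is positive downward, so a negative J denotes an upward flux out of
    the sediment. The result is returned in mmol m-2 d-1.
    """
    theta_sq = 1.0 - np.log(porosity ** 2)
    d_s = d_w_m2_s / theta_sq
    dc_dx = np.polyfit(depth_m, conc_mol_m3, 1)[0]   # linear gradient, mol m-4
    j_mol_m2_s = -porosity * d_s * dc_dx
    return j_mol_m2_s * 86_400.0 * 1e3

# Placeholder pore-water CH4 profile for the top 3 cm of the sediment
# (1 mmol L-1 equals 1 mol m-3).
depth = np.array([0.005, 0.015, 0.025])   # m below the sediment surface
ch4 = np.array([0.05, 0.12, 0.20])        # mol m-3

release = -diffusive_flux_mmol_m2_d(ch4, depth, porosity=0.85, d_w_m2_s=1.5e-9)

# Up-scaling to a whole-lake rate for one site and season (Eq. 5):
# total flux = areal flux x sediment surface area of that site.
area_m2 = 4.0e4                           # placeholder sediment surface area
print(f"CH4 release: {release:.2f} mmol m-2 d-1 "
      f"-> {release / 1e3 * area_m2:.0f} mol d-1 for the site")
```

With the measured porosities, literature diffusion coefficients and the site-specific areas from the Supporting Information, an analogous calculation underlies the seasonal, site-resolved fluxes discussed below.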
For the whole-lake mass balance, total fluxes per site and season (J season TEA/site [mol d −1 ]) were calculated by multiplying the sedimentary and water column fluxes (J TEA ) by the corresponding sediment surface area of the littoral, slope and profundal sites and/or the redoxcline planar area (A Flux [m 2 ]), respectively, for each sampling date, i.e., J season TEA/site = J TEA × A Flux (Eq. 5). The sediment surface areas as well as the planar lake area were taken from Schröder (2004) (Supporting Information Table S2). Uncertainties in the mass balance were analyzed by considering deviations of 10% each in the diffusion coefficients and pore-water fluxes, the average wind speeds, and the sediment surface areas. For the integrated annual CH 4 budget, we used a simplified model and assumed the total fluxes per site and season (J season TEA/site ) to be constant over a distinct period of time that is characteristic for each stage of the lake stratification period (Supporting Information Table S1). Given that, we obtained the annual flux (J Annual [mol]) by summing the products of J season TEA/site and the duration of the respective stage. All parameters used for the calculation of the whole-lake CH 4 mass balance and the annual budget for Lake Willersinnweiher are presented in the Supporting Information (Tables S1 and S2).

Results
Characterization of groundwater at Lake Willersinnweiher
The groundwater feeding Lake Willersinnweiher showed major contents of calcium (Ca), DIC and SO 4 2− (Ca-HCO 3 -SO 4 type water), with SO 4 2− concentrations of up to 2.3 mM (Fig. 2). In comparison, the groundwater outflow showed lower SO 4 2− but higher DIC concentrations, suggesting that during the passage through the lake SO 4 2− is consumed and DIC is formed. The pH values were 8 and 8.3 for the groundwater inflow and outflow, respectively, indicating a carbonate equilibrium in the aquifer sediments. The groundwater at Lake Willersinnweiher showed anoxic conditions, indicated by redox potentials of − 200 mV and O 2 contents below 0.05 mM (Table 1). Groundwater NO 3 − contents were low, with a mean value of 8 μM over the year 2018. Dissolved Mn and Fe were observed for all sampling periods, with mean concentrations of 14 and 47 μM, respectively. Methane groundwater concentrations upstream of Lake Willersinnweiher varied seasonally, ranging from 0.12 μM in winter to 6.75 μM in summer. Groundwater δ 13 C-CH 4 values varied between − 6.3‰ in late summer and − 10‰ in winter for the inflow, and between − 18 and 1.6‰ for the outflow of Lake Willersinnweiher (Table 1; Supporting Information Fig. S2).

Fig. 2. STIFF diagrams of the groundwater (a) inflow and (c) outflow as well as of surficial lake water during late-stage thermal stratification in 2018 (b). The groundwater inflow of Lake Willersinnweiher is classified as Ca-HCO 3 -SO 4 type water, the outflow of Lake Willersinnweiher as Ca-HCO 3 type water.

Limnic conditions and aquatic geochemistry
Lake Willersinnweiher is a monomictic system, with circulation and oxic water conditions in winter (Figs. 3 and 4; Supporting Information Figs. S3-S5). In spring, the build-up of the thermal stratification of Lake Willersinnweiher forms the thermocline, separating warm epilimnic from colder hypolimnic water. The redoxcline, where reduced products of organic turnover processes in the sediments are re-oxidized in the water column, shifts upwards and reaches the thermocline during summer (Fig. 3).
This leads to the formation of the well-oxygenated epilimnion, separated from the anoxic hypolimnion by a thin transition layer (Fig. 3). The turnover of organic matter in the sediments of Lake Willersinnweiher is dominated by SO 4 2− reduction and methanogenesis. The reduced products, such as Mn(II) or S(-II), showed increasing pore-water concentrations and were released across the sediment-water interface into the bottom water during the year. Pore-water SO 4 2− decreased within the uppermost 5 cm of the sediments, and dissolved S(-II) built up below to maximum concentrations (Fig. 4). Maximum S(-II) concentrations were found at around 5-10 cm. The lowest pore-water SO 4 2− concentrations, along with the highest S(-II) concentrations, were found in the sediments in October 2018. This is accompanied by the lowest SO 4 2− concentrations and highest S(-II) concentrations in the hypolimnion in autumn (Fig. 4).

Table 1. Mean values of parameters describing redox conditions in the groundwater inflow and outflow of Lake Willersinnweiher in the period of investigation.

Epilimnic S(-II) concentrations were below the detection limit throughout the year. During winter, SO 4 2− concentrations were constant with depth, while S(-II) was not detectable in the entire water column. Pore-water CH 4 generally increased with sediment depth and showed maximum values at the profundal site (Fig. 4). Methane concentrations at the water-sediment interface were up to two orders of magnitude higher than in the hypolimnion (Fig. 4). Maximum CH 4 concentrations were found at maximum water depths, whereas minimum CH 4 concentrations (up to 60 nM) and the highest δ 13 C-CH 4 values of up to − 35‰ were found around the redoxcline. The δ 13 C-CH 4 values in the bottom-near waters decreased from − 65 to − 79‰ during summer. In general, the δ 13 C-CH 4 values showed two distinct zones in the water column: the euxinic hypolimnion with depleted δ 13 C-CH 4 values (down to − 79‰) and the epilimnion with less negative δ 13 C-CH 4 values (around − 51‰; Fig. 4). Remarkably, CH 4 oversaturation (up to 2200 nM) with respect to atmospheric CH 4 concentrations (> 3 nM) was found in the entire water column, and CH 4 accumulated in the surface mixed layer. Pore-water DIC concentrations increased significantly with depth, with local maxima at 5-10 cm depth. The DIC concentrations were lower during winter than during late summer, with a less pronounced seasonal impact for profundal sediments compared to littoral sediments. Maximum pore-water DIC concentrations of > 10 mM were found in late summer, coinciding with the highest DIC concentrations of 3.1 mM in the hypolimnion. In the epilimnion, DIC concentrations decreased during the year, and epilimnic DIC concentrations in summer (range 1.08 to 1.4 mM) were substantially lower than in winter (range 2.0 to 2.1 mM). Pore-water Mn concentrations peaked within the uppermost 8 cm of the sediments, with maximum concentrations (60 μM) found in October 2018 (Fig. 3). Dissolved Mn concentrations were highest in the bottom water and below the detection limit in the epilimnion. Maximum water column Mn concentrations (up to 40 μM) were found at the redoxcline. Pore-water Fe and NO 3 − concentrations were in the very low μM range (up to 3 μM; Supporting Information Tables S3-S5).
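Before the observed transition zones are described, the following minimal sketch shows how such a zone can be read off discrete pore-water profiles, following the criteria given in the Calculations section; all depths and concentrations below are invented for illustration, real profiles would typically be smoothed first, and identical porosity and diffusivity above and below the zone are assumed for the gradient comparison.

```python
import numpy as np

# Hypothetical pore-water profiles (depth in cm, concentrations in mmol L-1).
depth_cm = np.array([1, 3, 5, 8, 12, 16, 20])
so4 = np.array([2.00, 1.40, 0.70, 0.15, 0.05, 0.05, 0.05])
ch4 = np.array([0.01, 0.03, 0.06, 0.35, 0.80, 1.00, 1.05])

# The transition zone is taken as the interval between the SO4 minimum and the
# CH4 maximum, i.e. where downward-diffusing SO4 and upward-diffusing CH4 overlap.
z_so4_min = depth_cm[np.argmin(so4)]
z_ch4_max = depth_cm[np.argmax(ch4)]
smtz_top, smtz_bottom = sorted((z_so4_min, z_ch4_max))
print(f"SO4 minimum at {z_so4_min} cm, CH4 maximum at {z_ch4_max} cm "
      f"-> transition zone roughly between {smtz_top} and {smtz_bottom} cm")

# Rough fraction of the upward-diffusing CH4 consumed within/above the zone:
# compare the CH4 gradient entering from below with the weaker gradient that
# remains in the uppermost sediment (same porosity and diffusivity assumed).
grad_below = np.polyfit(depth_cm[4:], ch4[4:], 1)[0]   # >= 12 cm
grad_above = np.polyfit(depth_cm[:3], ch4[:3], 1)[0]   # uppermost 5 cm
fraction_consumed = 1.0 - grad_above / grad_below
print(f"~{fraction_consumed:.0%} of the upward CH4 flux is consumed before "
      "reaching the sediment-water interface")
```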
Porewater Fe and NO 3 − concentrations are in the very low μMrange (up to 3 μM; Supporting Information Tables S3-S5). Sulfate methane transition zones The pore-water profiles of the redox sensitive parameters CH 4 , SO 4 2− and S(-II) in the sediments showed temporal dynamics at the slope site over the year. Maximum pore-water CH 4 concentrations increased from < 0.1 to > 1 mM over the summer months and pore water S(-II) concentrations built up with time to up to 6 mM in October 2018. The depth of CH 4 maxima and SO 4 2− minima, here classified as "sulfatemethane transition zones" in Fig. 5, were determined at all sites in spring 2017. In addition, the slope site showed two distinct transition zones in 5 and 10 cm depth. Over the year 2018, sulfate-methane transition zones were detected at the slope site only. Here, the depth of the zone decreased from May to October 2018 from below 22 to 8 cm (Fig. 5). Quantification of CH 4 consumption and release At Lake Willersinnweiher, sediment fluxes of CH 4 and SO 4 2 − showed maximum values of 0.66 and 4.56 mmol m −2 d −1 , respectively, whereas maximum Mn reduction rates were substantially lower (Supporting Information Table S6). Pore-water fluxes of Mn, DIC, SO 4 2− and CH 4 were generally increasing during summer and with increasing bathymetric depth of the sampled site. Sulfate reduction is the main pathway of the conversion of organic matter in the lake sediments (Supporting Information Tables S6; Fig. 6). The flux of electrons, calculated for the reduction (CH 4 and SO 4 2− ) and the corresponding product (DIC and S(-II)) in the transition zone also follow a seasonal pattern at all sites (Fig. 6). Fig. 6). Comparing the diffusional fluxes in the sulfate-methane transition zone in relation to their total values in the sediments of Lake Willersinnweiher, shows that sulfate reduction in the this zone generally accounts for between 10 and 45% of the total sulfate reduction (Fig. 7). The extent of the sulfate reduction in the transition zone is strongly dependent on the site. Profundal sediments generally show lower proportions of sulfate reduction in this zone (10-15%) than the littoral and slope sediments. The percentage of sulfate reduction in the transition zone of profundal sediments decreases during the summer months to minimum values of 10% and increases to maximum values (15%) in spring. In contrast, littoral sediments show minimum percentages in spring (28%) and constantly increasing ratios throughout the year to up to 45% in winter. The percentages of DIC turnover rates in the sulfatemethane transition zones are around 65% and are temporally and spatially relatively homogeneous (Fig. 7). In the profundal zone, DIC SMTZ /DIC tot show minimum values during the winter months, whereas the littoral zone shows minimum percentages in summer and constantly increasing ratios over the remaining year. The largest variations in DIC SMTZ /DIC tot ratios are found in the slope zone, where the percentage is halved in autumn, compared to the spring and summer levels. Figure 7 shows that about~20-80% of the upwards diffusing CH 4 is oxidized in the sulfate-methane transition zones before reaching the water column. Profundal sediments thereby generally show substantially lower proportions of anerobic oxidation of methane (< 40%) than the littoral and slope sites (Fig. 7). The percentages of diffusive CH 4 fluxes into the transition zone show a strong seasonality, notably in the shallower zones of Lake Willersinnweiher. 
At all sites, CH 4 was considerably released across the sediment-water interface into the bottom water, ranging from 0.02 to 0.41 mmol m −2 d −1 at the littoral sites and up to 0.66 mmol m −2 d −1 at the profundal sites (Fig. 7e & Table 2). The maximum CH 4 release rates were found in early summer for littoral sediments and in late summer for slope and profundal sediments. The rates of CH 4 release from the sediments are thereby substantially dependent on the sulfate reduction within the transition zone: the higher the sulfate reduction rates in this zone, the more CH 4 is oxidized and the less CH 4 is released via diffusion from the sediments into the water column of Lake Willersinnweiher (Fig. 7e).

[Fig. 7e. Methane release from the sediments into the water column in relation to the proportion of sulfate reduction in the transition zone relative to total sulfate reduction rates in the sediments of Lake Willersinnweiher. Slope sediments were sampled from May to October 2018 only.]

Methane fluxes from the bottom-near waters into the redoxcline in the water column of Lake Willersinnweiher increased from 0.16 to 1.04 mmol m −2 d −1 from May to October 2018. Along with increasing CH 4 fluxes, the isotopic fractionation factor α c for CH 4 consumption within the water column increased from 1.014 (May) to 1.041 (October).

Methane mass balance of Lake Willersinnweiher

The CH 4 budget of Lake Willersinnweiher is based on the diffusive fluxes in the sediments and the water column as well as the groundwater contribution and calculated atmospheric emission rates. Mass balance uncertainties were analyzed by considering deviations in diffusion coefficients, pore-water fluxes, average wind speeds as well as in the sediment surface area. In general, sedimentary and water column conversion and emission rates of CH 4 increased during summer stratification and show maximum rates in summer and minimum rates in winter (Supporting Information Fig. S7). In addition, their relative shares shifted gradually over the course of the year (Supporting Information Table S6). In winter and spring, considerably more CH 4 is consumed in the sediments than released into the water column at all sites. During summer this trend reverses, and a larger proportion of CH 4 was released than consumed in the sediments of Lake Willersinnweiher in the late summer. At the redoxcline, CH 4 consumption rates (73 ± 11 mol d −1 ) are significantly higher than the rates of diffusive sedimentary release below (54 ± 8 mol d −1 ) in autumn. Thus, considerably larger quantities of CH 4 were consumed in the water column than released from sub-redoxcline sediments, based on the diffusive CH 4 fluxes. Data on CH 4 ebullition from profundal and littoral sediments were not available for the year 2018, and the presented data were estimated as the difference between calculated rates for diffusional release and emission to the atmosphere. Methane emissions from the profundal zones were highest in summer (120 ± 9 mol d −1 ) and more than 20 times higher than in winter (5 ± 1 mol d −1 ). In the littoral zone, CH 4 oxidation and release rates were highest in summer, and the difference between the calculated rates based on sedimentary diffusive fluxes and the surface water CH 4 concentration and emission persisted throughout the year. The main difference was observed in January, when the release rates were at a minimum (0.04 ± 0.01 mol d −1 ), but the measured littoral surface CH 4 concentrations were up to 1 μM, resulting in an emission of 110 ± 5 mol d −1 CH 4 .
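Surface emission estimates like the January value above are typically obtained from the surface oversaturation and a wind-speed-dependent gas transfer velocity, F = k·(C_w − C_eq). The Python sketch below illustrates this with a Cole-and-Caraco-type k600 parameterization; the parameterization, Schmidt number, and wind speed are assumptions chosen for illustration and are not necessarily those used in the study.

def ch4_emission(c_surface_nM, c_eq_nM=3.0, u10_m_s=3.0, schmidt=600.0):
    """Diffusive CH4 flux to the atmosphere in mmol m-2 d-1."""
    k600_cm_h = 2.07 + 0.215 * u10_m_s ** 1.7          # gas transfer velocity at Sc = 600
    k_m_d = k600_cm_h * (schmidt / 600.0) ** -0.5 * 24.0 / 100.0
    delta_c = (c_surface_nM - c_eq_nM) * 1e-3           # nmol/L -> mmol/m3
    return k_m_d * delta_c

# Example: littoral surface water at ~1000 nM CH4 under moderate wind
print(round(ch4_emission(1000.0), 2), "mmol m-2 d-1")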
The integrated annual CH 4 budget of Lake Willersinnweiher is presented in Fig. 8. Groundwater added 0.5 ± 0.0 kmol CH 4 to, and removed 0.5 ± 1.1 kmol CH 4 from, the hypolimnion and the epilimnion of Lake Willersinnweiher in 2018. The estimated rates for CH 4 consumption within the sulfate-methane transition zone in the sediments were lower for the littoral zone (1.9 ± 0.3 kmol) than for the slope (2.0 ± 0.3 kmol) and the profundal zone (2.5 ± 1.3 kmol). The diffusive release from the sediments into the bottom water layer was highest in the profundal zone (8.4 ± 1.3 kmol). The littoral zone released less than half (2.7 ± 0.4 kmol) and the slope zone less than a quarter (1.8 ± 0.3 kmol) of the profundal release. The rates of CH 4 release from the sub-redoxcline sediments (profundal and slope zone) and of CH 4 consumption at the redoxcline are consistent, and CH 4 was consumed almost completely at the redoxcline (9.7 ± 1.5 kmol). Nevertheless, CH 4 oversaturation was present in the surface water layer, and the profundal zone emitted 19 ± 1.4 kmol into the atmosphere over the year 2018. In the littoral zone, the calculated diffusive CH 4 release from the sediments (2.7 ± 0.4 kmol) was almost 10 times lower than the calculated CH 4 emissions into the atmosphere (25 ± 1.8 kmol) derived from the surface CH 4 concentrations. Thus, more CH 4 was emitted from the surface water in the littoral zones than was released from the sediments by diffusive CH 4 flux.
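The annual budget terms quoted above can be collected into a simple source/sink ledger, as sketched below in Python. Combining the reported uncertainties in quadrature is an assumption made here for illustration; the residual between emissions and diffusive inputs corresponds to the littoral "direct" pathways (ebullition, plant-mediated transport) and possible in-lake production discussed in the following sections.

import math

# Annual CH4 budget terms quoted above (kmol per year, value and reported uncertainty).
sources = {
    "diffusive release, profundal": (8.4, 1.3),
    "diffusive release, slope": (1.8, 0.3),
    "diffusive release, littoral": (2.7, 0.4),
    "groundwater inflow": (0.5, 0.0),
}
sinks = {
    "oxidation at the redoxcline": (9.7, 1.5),
    "atmospheric emission, profundal": (19.0, 1.4),
    "atmospheric emission, littoral": (25.0, 1.8),
    "groundwater outflow": (0.5, 1.1),
}

def total(terms):
    value = sum(v for v, _ in terms.values())
    error = math.sqrt(sum(e ** 2 for _, e in terms.values()))   # quadrature (assumption)
    return value, error

src, src_err = total(sources)
snk, snk_err = total(sinks)
print(f"diffusive + groundwater inputs: {src:.1f} +/- {src_err:.1f} kmol")
print(f"oxidation + emission + outflow: {snk:.1f} +/- {snk_err:.1f} kmol")
# The gap is what the text attributes to 'direct' littoral pathways and possible in-lake production.
print(f"unaccounted supply required: {snk - src:.1f} kmol")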
Discussion

Groundwater feeding Lake Willersinnweiher is part of the groundwater system within the Upper Rhine Graben and …

[Table 2. Calculated seasonal fluxes for CH 4 from the sediments into the bottom water layer of Lake Willersinnweiher. Fluxes for CH 4 oxidation at the redoxcline and its fractionation factors (α c ) in the water column of Lake Willersinnweiher were calculated for May, August, and October 2018.]

[Table/figure note: Rates for consumption and release of CH 4 were estimated based on the calculated fluxes in the water column and the sediments of Lake Willersinnweiher. *Data on ebullition were not available within the frame of this study and the presented data were estimated based on the discrepancy between the rates for diffusive release from the sediments and the calculated emissions. For detailed data of the whole-lake CH 4 mass balance, we refer to the Supporting Information Data S1.]

In Lake Willersinnweiher, the cycling of both redox-sensitive elements is more complex. In general, Fe is limited since Fe is bound as Fe(OH) 3 and FeS x in the oxic and euxinic water layers, respectively. The latter is partly oxidized during the winter circulation, when O 2 reaches the sediment surface. Under these conditions, sedimentary Mn(IV) minerals are also formed (Schröder 2004). With the build-up of the lake's stratification, the sediment-water interface becomes anoxic in early spring. By this, e.g., sedimentary Mn oxides are reduced, and Mn(II) is released from the sediments into the water column. Manganese is thus significantly enriched below the redoxcline, and its contents are controlled by the sedimentary Mn(II) release and carbonate equilibria via the formation of, e.g., rhodochrosite (Schröder 2004). In addition, Mn diffuses towards the redoxcline, where it is re-oxidized and Mn(IV) minerals are formed (Schröder 2004). In the following months, full stratification of Lake Willersinnweiher is reached from July/August to November/December, and the redoxcline decouples the oxic surface water from the euxinic hypolimnion. The rising sedimentary conversion rates of organic substances during summer, indicated by increasing DIC rates, result in an enhanced release and diffusion of further reduced species, such as Mn(II) and S(-II), into the water column and towards the redoxcline. Thus, S(-II) might fuel a local Mn shuttle by the reduction of Mn(IV) minerals via sulfide below the redoxcline (e.g., Havig et al. 2015). Due to the intense reduction of SO 4 2− and sulfur cycling, the lake sediments act as a sink for sulfur, whereas the sediments are the major source of the DIC and CH 4 found in the euxinic hypolimnion of Lake Willersinnweiher. Calculated sedimentary SO 4 2− reduction rates of Lake Willersinnweiher … (Cappenberg 1975). As considerable pore-water concentrations of free sulfide were found, sulfate-reducing bacteria might out-compete the methanogens for available H 2 in the sediments of Lake Willersinnweiher. Methane is consequently produced via the hydrogenotrophic pathway (carbonate reduction) at greater sediment depths, where SO 4 2− is depleted and sulfate-reducing bacteria are inactive.

However, the sulfate-methane interaction is not only affecting CH 4 formation, but also CH 4 consumption in the sediments. At Lake Willersinnweiher, sedimentary SO 4 2− minima coincide with the depth of CH 4 minima, along with local maxima of pore-water S(-II) and DIC concentrations, indicating the presence of sulfate-methane transition zones (SMTZ) at all sites. Sulfate-methane transition zones are a common feature first described for marine sediments due to high pore-water SO 4 2− concentrations (Martens and Berner 1974), whereas pronounced SMTZ have so far only been observed in a few specific freshwater environments (Borrel et al. 2011; Schubert et al. 2011; Timmers et al. 2016). In the sediments, CH 4 is most likely oxidized with SO 4 2− as electron acceptor by methane-oxidizing archaea (anaerobic methanotrophs) and sulfate-reducing bacteria consuming upward-migrating CH 4 (Hinrichs et al. 1999; Boetius et al. 2000). The anaerobic CH 4 oxidation in freshwater environments is usually coupled to the reduction of the energetically more favorable electron acceptors, such as nitrate and nitrite (e.g., Raghoebarsing et al. 2006) or Fe(III) and/or Mn(IV) (e.g., Beal et al. 2009). In Lake Willersinnweiher, NO 3 − and Fe concentrations are negligible in the pore-water, and dissolved Fe is trapped as sulfides in the sulfur-dominated sediments (Schröder 2004). Therefore, we assume that NO 3 − and Fe do not play any role in sedimentary anaerobic CH 4 oxidation in Lake Willersinnweiher. Anaerobic CH 4 oxidation coupled to the reduction of Mn oxides might be an additional pathway in the sediments of Lake Willersinnweiher, since Mn oxides as electron acceptors are energetically more favorable than SO 4 2− . Intense Mn cycling and the formation of Mn(IV) minerals in the sediments were reported for Lake Willersinnweiher during winter, when the uppermost 2-3 mm of the sediment become oxic (Schröder 2004). The formation of the anoxic hypolimnion in spring then leads to the lake-internal Mn cycling and a re-reduction of the Mn oxides via sulfide in the sediments. This phenomenon has been described in the past for various eutrophic lakes (e.g., Davison and Tipping 1984).
At Lake Willersinnweiher, dissolved Mn was found in the pore-water (up to 60 μM), potentially indicating anaerobic CH 4 oxidation via Mn oxides. Direct anaerobic CH 4 oxidation via Mn oxides as electron acceptors might be quantitatively significant but subordinate to the anaerobic CH 4 oxidation via SO 4 2− , due to the dominance of SO 4 2− reduction in the sediments of Lake Willersinnweiher. However, it is more likely that Mn supports sedimentary anaerobic CH 4 oxidation via SO 4 2− through re-oxidation of reduced S species, as was also recently described for Lake Cadagno (Su et al. 2019).

The interaction of methanogenesis, sulfur cycling and the recycling of DIC produced by anaerobic CH 4 oxidation leads to a sedimentary carbon (re-)cycling at the top of the methanogenic zone. Our results indicate that 50-70% of the sedimentary DIC produced results from anaerobic CH 4 oxidation within the transition zone, which in turn might fuel a secondary methanogenesis via CO 2 reduction (see Fig. 9). This phenomenon might explain the imbalance between the normalized CH 4 , DIC, S(-II) and SO 4 2− fluxes in the sulfate-methane transition zone of Lake Willersinnweiher. The SMTZ-associated secondary methanogenesis or "cryptic methane cycling" was previously reported to account for up to 60% of the organoclastic sulfate reduction within marine SMTZ (Beulig et al. 2019). These processes and interactions between carbon and sulfur as well as Mn cycling might be recorded in the sulfur (δ 34 S values of SO 4 2− and S(-II)) and carbon (δ 13 C values of DIC and CH 4 ) isotope composition of the specific compounds involved in the sediments of Lake Willersinnweiher. Future investigations of sedimentary δ 13 C and δ 34 S values are therefore desirable and would provide further evidence of anaerobic CH 4 oxidation in the sediments of Lake Willersinnweiher by covering all major constituents in carbon cycling.
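The partitioning of DIC production between anaerobic CH 4 oxidation and organoclastic sulfate reduction follows directly from the reaction stoichiometries (1 mol DIC per mol SO 4 2− for AOM, 2 mol DIC per mol SO 4 2− for organoclastic sulfate reduction). The Python sketch below works through that bookkeeping; the input fluxes are placeholders chosen only to illustrate how an AOM share of DIC in the 50-70% range can arise, not measured values from this study.

def dic_from_aom_fraction(so4_reduction_total, ch4_oxidized_in_smtz):
    """Fraction of DIC production attributable to AOM, assuming 1:1 AOM stoichiometry and
    that the remaining sulfate reduction is organoclastic (2 DIC per SO4)."""
    aom = min(ch4_oxidized_in_smtz, so4_reduction_total)   # mol SO4 consumed by AOM
    organoclastic = so4_reduction_total - aom
    dic_aom = aom                    # CH4 + SO4^2- -> HCO3- + HS- + H2O
    dic_org = 2.0 * organoclastic    # 2 CH2O + SO4^2- -> 2 HCO3- + H2S
    return dic_aom / (dic_aom + dic_org)

# Placeholder example: 0.5 mmol m-2 d-1 sulfate reduction in the SMTZ, 0.4 of it fuelled by CH4
print(f"{dic_from_aom_fraction(0.5, 0.4):.0%} of SMTZ DIC production from AOM")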
At Lake Willersinnweiher, the extent of the interaction of carbon and sulfur cycling in the sediments is mainly driven by seasonal variations of the thermocline and redoxcline depth. The seasonal changes in temperature thereby lead to considerable changes in pore-water chemistry and turnover rates, controlling the biogeochemical processes in the sediments (e.g., Crill and Martens 1987). In a short period, apparently occurring in spring, sulfate-methane transition zones are clearly identifiable at all sampling sites.

[Fig. 9. Simplified scheme of the interaction of carbon, sulfur and manganese in freshwater sediments of Lake Willersinnweiher, with AOM as anaerobic oxidation of methane. The interaction between sulfur and manganese cycles might also be observed at the redoxcline of the water column. Modified after Borowski et al. (1997) for the conditions at Lake Willersinnweiher.]

This period is characterized by rather moderate sedimentary organic matter degradation and mineralization, indicated by moderate DIC fluxes, and moderate sulfate reduction rates (below 1.5 mmol m −2 d −1 ), as well as a deeper transition zone depth within the sediments. When the thermal stratification of the lake builds up, the redoxcline shifts upwards in the water column, resulting in a vertical shifting of the sedimentary transition zones during the summer months. Secondary methanogenesis and intense sulfur cycling within the sediments also trigger non-steady-state conditions in the settings of anaerobic CH 4 oxidation (e.g., Dale et al. 2008). These variable conditions also result in occasionally more than one transition zone within the slope sediments, where maximum temperature gradients are present. Seasonal displacement of the transition zone depths has also been previously reported for coastal marine environments (e.g., Dale et al. 2008). As the redoxcline shifts further up from the sediments into the water column, due to the seasonal evolution of the sedimentary organic matter degradation and mineralization, pore-water fluxes increase considerably at all sites. Total DIC concentrations and fluxes increase during the summer months, while the SMTZ-associated DIC formation in littoral and profundal sediments was highest in spring and decreased during the remaining year.

The extent of sulfate reduction within the sulfate-methane transition zones also substantially controls the extent of anaerobic CH 4 oxidation in the sediments of Lake Willersinnweiher: higher sulfate reduction rates in this zone result in higher rates of anaerobic CH 4 oxidation. In the profundal zone, sulfate reduction rates in the transition zone are generally less significant for total sulfate reduction rates than in the shallower areas, resulting in substantially higher diffusional CH 4 release from the sediments than in the littoral and slope zones. Hence, diffusional CH 4 release into the water column is linked to the anaerobic CH 4 oxidation efficiency in the transition zone, which might be controlled by the different nature of OM degradation and the substrate availability for sulfate-reducing bacteria changing with lake depth (Boetius et al. 2000). Substantially increasing anaerobic CH 4 oxidation rates were observed for the slope and littoral sites, most likely due to the fast seasonal temperature changes (Figs. 6, 7), which were reported to be the main factor behind the seasonality in rates in the sulfate-methane transition zones (Dale et al. 2008). At Lake Willersinnweiher, the availability of substrates is also an important factor controlling sedimentary processes and overall benthic metabolism (e.g., Westrich and Berner 1984). The proportion of transition zone-related sulfate reduction rates in the total sulfate reduction rates changed considerably and halved at all sites in autumn 2018, indicating seasonal variations in dissolved substrates for sulfate-reducing bacteria. Potentially, the Mn(IV) minerals formed in winter are used up and are therefore unavailable as oxidants for the sedimentary sulfur (re-)cycling in late summer. In parallel, CH 4 release into the water column increased fivefold and thereby considerably more than the anaerobic CH 4 oxidation rates within the sediment. This might indicate that oxidation activity in the sedimentary transition zone could not keep up with increasing methanogenesis during summer and, hence, that the relative impact of transition zone-related anaerobic CH 4 oxidation on CH 4 release into the water column decreased considerably. However, it must be noted that this work does not include studies on sedimentary microbial activities, and it remains speculative whether the activity of anaerobic CH 4 oxidation (I) has decreased, (II) has partially shifted into the euxinic bottom water layer, as has been suggested for other freshwater or marine environments (e.g., Zigah et al. 2015), or whether (III) these dynamics are also overprinted by other processes, such as a more dynamic methanogenesis or CH 4 diffusion within the sediments.
These questions cannot be answered within the frame of this study, as they would require further detailed geochemical and microbial investigations of the sediment and the water column. In any case, anaerobic CH 4 oxidation in the water column would only be subordinate to sedimentary anaerobic and aerobic CH 4 oxidation. The dominant CH 4 oxidation pathway in the water column of Lake Willersinnweiher is the oxidation via O 2 at the redoxcline (see Fig. 8), where more than 98% of the aquatic CH 4 is consumed by aerobic methanotrophs. The δ 13 C-CH 4 values increase towards the redoxcline, indicating aerobic microbial oxidation within the water column of Lake Willersinnweiher (e.g., Barker and Fritz 1981). The calculated α c for redoxcline CH 4 oxidation at Lake Willersinnweiher ranges from 1.014 to 1.041 for spring and late summer, respectively, and is consistent with reported fractionation factors of aerobic oxidation ranging from 1.003 to 1.039 (e.g., Zigah et al. 2015). As a consequence, the interaction of anaerobic (sediment) and aerobic (water column) oxidation of CH 4 acts as an effective barrier to minimize CH 4 release into the surface water and the atmosphere during summer stratification.

Emissions during the summer stratification mainly result from CH 4 oversaturation (up to 2200 nM) compared to the atmospheric CH 4 equilibrium concentration of ~3 nM, an oversaturation that was present in the surface water of Lake Willersinnweiher all year. In 2018, littoral surface water CH 4 concentrations were considerably higher than profundal ones, which is in good agreement with previous studies on the spatiotemporal distribution pattern of CH 4 at other lakes (e.g., Hofmann 2013). The high surface water CH 4 concentrations of the littoral zones and, hence, CH 4 emissions into the atmosphere, cannot be solely explained by the diffusional fluxes from the sediment (Fig. 8; Supporting Information Fig. S7). Our CH 4 mass balance indicates that only about 10-30% of the littoral CH 4 emissions result from diffusive release from the littoral sediments, which is in very good agreement with previous studies (e.g., Bastviken et al. 2004). Since anaerobic CH 4 oxidation in the sulfate-methane transition zone effectively limits the diffusive CH 4 release from the sediments, littoral CH 4 release at Lake Willersinnweiher most likely originates from "direct" release pathways such as plant-mediated and ebullitive transfer. Plant ventilation might occur, as the littoral areas at Lake Willersinnweiher are mostly overgrown by submerged plants that might lead to intense exchange between sediment, water and atmosphere. These root-associated methane emissions have been reported as the most important pathway of CH 4 emissions in wetlands (e.g., Schütz et al. 1989). Ebullition is the transport of gas bubbles super-saturated with CH 4 from sediments into the water column and the atmosphere and occurs in marine and lacustrine environments (e.g., Bastviken et al. 2004). Methane bubbles released from the sediment were identified as a main pathway for CH 4 emissions from lakes and are known to show significant spatial and temporal heterogeneity. At Lake Willersinnweiher, ebullition might explain the discrepancy in the CH 4 budget for the littoral zones, especially during winter, when surface water CH 4 concentrations are high but both sedimentary aerobic and anaerobic CH 4 oxidation reduce diffusional CH 4 release to a minimum.
Although ebullition might generally be less intense during winter and spring, due to low (pore-)water temperatures and therefore slower organic matter decomposition, it was shown to be the main pathway for CH 4 emission from Lake Kinneret during winter (Schmid et al. 2017). Due to the dominance of "direct" CH 4 release pathways in the littoral zone, our mass balance might underestimate the total CH 4 emission rates for Lake Willersinnweiher, as CH 4 gas bubbles are known to be directly emitted to the atmosphere (McGinnis et al. 2006). Thus, future research should focus on the quantification of ebullition volumes and fluxes derived from the sediments and their spatial variability, as this seems to be the important CH 4 transport pathway at least for littoral sediments at Lake Willersinnweiher.

The rates of diffusive CH 4 release from the sub-redoxcline sediments (profundal and slope zone) and of CH 4 consumption at the redoxcline are consistent, indicating that diffusional rather than ebullitive or plant-mediated release is driving the profundal CH 4 release into the deeper water layer of Lake Willersinnweiher. By this, the probability that CH 4 reaches the surface water via bubble transport from profundal sediments seems rather negligible, and sedimentary released CH 4 is mediated by the extent of anaerobic CH 4 oxidation within the sulfate-methane transition zones. In the profundal zone, surface water CH 4 concentrations are higher, and δ 13 C-CH 4 values are shifted towards lighter values, in the epilimnion compared to the metalimnion during the stratification period. This indicates that the redoxcline at Lake Willersinnweiher decouples the surface water from the deep water reservoirs (Fig. 4; Supporting Information Figs. S6 and S8). Although the sulfate-methane transition zone acts as an effective barrier by minimizing CH 4 release into the water column, CH 4 accumulates in the anoxic, deep water layer during summer. Stored CH 4 is then emitted during the autumn overturn period by vertical water mass mixing ("storage flux"). The extent of this sudden release of CH 4 previously sealed off from the atmosphere is unknown to date, but will result in temporarily high emission rates of CH 4 into the atmosphere (e.g., Bastviken et al. 2004). This contrasts with the permanent CH 4 emission due to the CH 4 oversaturation of the profundal surface water layer, which occurs throughout the year and therefore requires a source located in the upper oxic water layer. Methane oversaturation in the oxic water column has been consistently reported for various stratified lakes and might result from transport of CH 4 from the littoral zones (Murase et al. 2005; Peeters and Hofmann 2015), as surface water CH 4 concentrations in Lake Willersinnweiher were higher in the littoral than in the profundal zone. Based on the higher CH 4 concentrations in the littoral areas, a littoral origin of the CH 4 oversaturation in Lake Willersinnweiher cannot be excluded. However, the data of this study cannot provide any evidence for this assumption, as no systematic transect of water column profiles over the lake width was conducted. Epilimnic CH 4 oversaturation has also been attributed to internal CH 4 production (e.g., Grossart et al. 2011). In this context, some recent studies have unambiguously shown, applying stable isotope techniques, that CH 4 is produced by organisms such as freshwater and marine algae (e.g., Klintzsch et al. 2019; Hartmann et al. 2020) and cyanobacteria (Bižić et al. 2020), and even in the presence of oxygen (Keppler et al. 2009).
These alternative processes might also account for the observed elevated CH 4 concentrations in the profundal surface water of Lake Willersinnweiher, since at least its surrounding lakes have been facing growing eutrophication and intense cyanobacterial growth in recent years (https://badeseen.rlp-umwelt.de/servlet/is/1137/, accessed 11 November 2020). Detailed investigations of this newly discovered oxic CH 4 production are currently ongoing with the aim to decipher CH 4 formation in the epilimnion of Lake Willersinnweiher.

At Lake Willersinnweiher, groundwater inflow might represent an additional, and so far overlooked, source of surface water CH 4 in lakes, as the feeding groundwater is significantly enriched in CH 4 (up to 2.4 μM) compared to the lake's surface water. By this, groundwater would not only drive the extensive anaerobic CH 4 oxidation by providing SO 4 2− to the sediments of Lake Willersinnweiher, it would also affect the total CH 4 budget and thus also the emission of Lake Willersinnweiher, as the surface water CH 4 oversaturation occurred all year (Fig. 8). Groundwater δ 13 C-CH 4 values ranged between −18 and 1.6‰ and are rather unusual for natural environments (e.g., Schloemer et al. 2016). Background data on CH 4 concentrations and δ 13 C-CH 4 values in shallow aquifers have barely been reported for (southern) Germany thus far. The origin of the groundwater CH 4 remains unresolved to date. Groundwater CH 4 could originate from various sources, ranging from CH 4 as a product of (I) oxidative biodegradation of (highly volatile chlorinated) hydrocarbons at a contamination site, to (II) organic-rich sediments from old branches of the river Rhine, or (III) lakes upstream (Wollschläger et al. 2007; Supporting Information Table S3). In general, CH 4 formation via oxidative biodegradation of (highly volatile chlorinated) hydrocarbons at contamination sites might result in naturally uncommon δ 13 C-CH 4 values (e.g., Hunkeler et al. 2005). The isotopic composition of CH 4 is thereby controlled by the δ 13 C ratio of the original substrate. Contamination-driven biodegradation and methanogenesis is likely; however, contamination sites are not documented for the nearby environment. Potentially, these substrates might derive from the organic-rich sediments of old branches of the river Rhine, which span a large area upstream of Lake Willersinnweiher. On the other hand, aquifer-related methanogenesis is substantially limited in the presence of SO 4 2− , and CH 4 originating from these sediments might also have been fully oxidized within the lakes upstream before reaching Lake Willersinnweiher, as Wollschläger et al. (2007) have shown that groundwater infiltrating the lake has passed through at least one of the lakes located in the catchment of Lake Willersinnweiher. There, lake water infiltrates into the aquifer through the porous littoral sediments, which are significantly enriched in CH 4 , before entering Lake Willersinnweiher. The resulting groundwater CH 4 with δ 13 C-CH 4 values between 1.6 and −18‰ would then imply extreme fractionation factors for CH 4 oxidation processes in the groundwater, which might be coupled to the reduction of NO 3 − , NO 2 − , Fe, and Mn, as O 2 is absent or limited within the groundwater.
Further investigations of the lake waters and sediments upstream, as well as along the groundwater flow path, are therefore crucial for the understanding of the complex interactions of groundwater, surface water and sediments within Lake Willersinnweiher and comparable lakes in the Upper Rhine Graben.

Conclusion

The calculated total CH 4 mass balance of Lake Willersinnweiher indicates that several important processes, including formation, consumption and transport/diffusion, control the fluxes to the atmosphere. In this study, we focused on the total methane mass balance of Lake Willersinnweiher and the interaction of carbon, sulfur and manganese cycling within the sulfate-methane transition zones in the sediments and at the redoxcline in the water column of Lake Willersinnweiher (SW Germany). Groundwater feeding the lake with SO 4 2− leads to anaerobic CH 4 oxidation in the upper sediment layers. Upward-migrating CH 4 is thereby most likely consumed via SO 4 2− in the transition zones, which are so far uncommon in freshwater environments. In contrast to their marine analogues, the transition zones in freshwater lakes are subject to significant seasonal dynamics in lake conditions and biogeochemical processes. These non-steady-state conditions within the sediments and the water column result in a seasonally dependent impact of sulfate-methane transition zones on methane cycling in freshwater environments. The interaction of anaerobic CH 4 oxidation in the sulfate-methane transition zones and aerobic CH 4 oxidation in the water column thereby acts as an effective barrier, minimizing CH 4 release into the surface water and the atmosphere. Therefore, we postulate that most CH 4 released from the profundal water surface to the atmosphere might stem from transport of CH 4 produced in the littoral sediments or originate from in situ production by aerobic organisms in the epilimnion. Significant CH 4 concentrations in the groundwater inflow might represent an additional source for the year-round CH 4 oversaturation in the surface water of Lake Willersinnweiher. By this, groundwater is not only providing SO 4 2− for anaerobic CH 4 oxidation in the sediments, but also affecting the CH 4 emissions of Lake Willersinnweiher. Groundwater-lake interactions are generally known to have severe implications for biogeochemical processes in lakes. Often, groundwater-fed lakes tend to be enriched in SO 4 2− originating from the weathering of sulfur-containing rocks in the catchment, leading to significant SO 4 2− reduction, especially in eutrophic lakes, where both the availability of organic matter and SO 4 2− concentrations are high. Lake Willersinnweiher is therefore exemplary for the many monomictic or dimictic lakes in the Upper Rhine Graben in Germany, and probably also in other sedimentary basins, that at least temporarily develop an anoxic hypolimnion. Hence, anaerobic CH 4 oxidation via sulfate as an electron acceptor might play a more significant role in controlling methane fluxes from limnic systems than previously estimated.

Author Contributions

Jan F. Kleint and Yannic Wellach developed the hypotheses, supported by Margot Isenbeck-Schröter. Jan F. Kleint visualized the data and prepared the original draft; Margot Isenbeck-Schröter supervised. Sampling was planned and conducted by Jan F. Kleint and Yannic Wellach.
Instrumentation and methodology were provided by Margot Isenbeck-Schröter for hydrogeochemical and in-field methane analyses and by Frank Keppler for gas and stable carbon isotope analyses in the laboratory. Jan F. Kleint, Yannic Wellach, and Moritz Schroll collected the data, and Margot Isenbeck-Schröter and Frank Keppler validated it. Jan F. Kleint, Yannic Wellach, Moritz Schroll, Frank Keppler, and Margot Isenbeck-Schröter interpreted the results. The manuscript was written under the lead of Jan F. Kleint, with contributions from all authors.
2021-05-04T22:05:20.009Z
2021-04-07T00:00:00.000
{ "year": 2021, "sha1": "6c5b80ea4f23e506b340b44548f0d9c36403b4d6", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/lno.11754", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "cfbac47415efb5b55ab849254439b9ad4f9cb1c9", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
212678098
pes2o/s2orc
v3-fos-license
Differentiated agonistic antibody targeting CD137 eradicates large tumors without hepatotoxicity

Conflict of interest: BW, LL, WKM, CLL, NZ, MS, and TJS are all current employees of Compass Therapeutics LLC and are partners in the LLC. UE, CC, SO, WG, PB, MO, SQH, JL, WFC, and RT are former employees of Compass Therapeutics and are partners in the LLC. UE, SQH, and WG are now employees of Akrevia Therapeutics. CC is now an employee of CRISPR Therapeutics. SO is now an employee of Bluebird Bio. RT and JL are now employees of TCR2 Therapeutics. PB is now an employee of Sanofi. MO is now an employee of Bristol-Myers Squibb. WFC is now an employee of Astellas Pharmaceuticals. ACA is a member of the scientific advisory board for Tizona Therapeutics, Compass Therapeutics, and Zumutor Biologics and is a paid consultant for Aximmune. ACA, CL, and CW received research funding from Compass Therapeutics. PB, MS, JL, RT, CLL, and UE are inventors on the following issued U.S. patents held by applicant Compass Therapeutics: patent nos. 10,279,038B2; 10,279,039B2; and 10,279,040B1; these patents cover pharmaceutical compositions comprising CTX-471 and methods of using CTX-471 for treating cancer or inducing antitumor immune response in cancer patients.

Introduction

While checkpoint inhibitors targeting PD-1 or CTLA-4 have been transformative therapies in immuno-oncology, they continue to have limited efficacy in the majority of patients in most indications. Agonistic antibodies against costimulatory immune receptors, including members of the TNF receptor superfamily (TNFRSF), have potential to complement checkpoint blockers by directly activating immune cells (1). However, early attempts to develop these antibodies clinically have struggled to find an appropriate balance between efficacy and toxicity. Cell surface glycoprotein CD137 (also known as 4-1BB and TNFRSF9) is a member of the TNFRSF that is expressed on activated T cells, Tregs, NK cells, monocytes, DCs, and tumor endothelial cells (2). Upon interaction with its cognate ligand, CD137L, CD137 forms stable homotrimers that recruit the TRAF-1/2 signaling adaptors to stimulate downstream activation of the NF-κB transcriptional pathway. Activation of CD137 delivers potent costimulatory signals to CD8 + cytotoxic T cells, promoting cell proliferation, facilitating differentiation into memory cells, and delivering important survival signals. The incorporation of the intracellular signaling domain of CD137 has improved the clinical activity of the second generation of CAR T cell therapies, indicating an important role of CD137 signaling in effective antitumor immunity (3,4). CD137 stimulation also enhances NK cell proliferation and IFN-γ production, and it increases the ability of NK cells to perform antibody-dependent cell-mediated cytotoxicity (ADCC) against tumor cells, demonstrating potential of CD137 agonists to invoke and bridge innate and adaptive immunity (2).

CD137 (4-1BB) is a member of the TNFR superfamily that represents a promising target for cancer immunotherapy. Recent insights into the function of TNFR agonist antibodies implicate epitope, affinity, and IgG subclass as critical features, and these observations help explain the limited activity and toxicity seen with clinically tested CD137 agonists.
Here, we describe the preclinical characterization of CTX-471, a fully human IgG4 agonist of CD137 that engages a unique epitope that is shared by human, cynomolgus monkey, and mouse and is associated with a differentiated pharmacology and toxicology profile. In vitro, CTX-471 increased IFN-γ production by human T cells in an Fcγ receptor-dependent (FcγR-dependent) manner, displaying an intermediate level of activity between 2 clinical-stage anti-CD137 antibodies. In mice, CTX-471 exhibited curative monotherapy activity in various syngeneic tumor models and showed a unique ability to cure mice of very large (~500 mm 3 ) tumors compared with validated antibodies against checkpoints and TNFR superfamily members. Extremely high doses of CTX-471 were well tolerated, with no signs of hepatic toxicity. Collectively, these data demonstrate that CTX-471 is a unique CD137 agonist that displays an excellent safety profile and an unprecedented level of monotherapy efficacy against very large tumors.

Promising preclinical results led to the development and clinical testing of 2 agonistic anti-CD137 antibodies in human cancer patients. Urelumab (BMS-663513), a fully human IgG4 monoclonal antibody developed by Bristol-Myers Squibb, induced inflammatory hepatotoxicity at doses ≥ 0.3 mg/kg, limiting its therapeutic window (14). In contrast, utomilumab (PF-05082566), a fully human IgG2 antibody developed by Pfizer, was safe at doses up to 10 mg/kg but demonstrated limited clinical efficacy (15). The starkly differing activity profiles of these 2 antibodies are likely related to one or more of the known differences between them: targeted epitope, CD137 ligand blocking capacity, IgG subclass, and the level of intrinsic agonistic activity (16). Neither of these antibodies is mouse cross-reactive, preventing the direct study of their pharmacology and toxicity in mice. However, antibody clone 3H3 has been widely studied and is known to be a strong agonist of mouse CD137 (17) that induces hepatic inflammation (18-20). Given the recent insights into the function of TNFR agonist antibodies that implicate epitope, affinity, and IgG subclass (16, 20-24) as critical features, an opportunity exists for CD137 agonists to achieve differentiated therapeutic activity and an improved safety profile. Here, we describe the preclinical characterization of CTX-471, a fully human IgG4 agonist of CD137 that displays a favorable and well-differentiated efficacy-safety profile that is attributed to a unique epitope, optimized affinity, and Fcγ receptor-dependent (FcγR-dependent) activity.

Results

CTX-471 binds to a unique epitope within CD137 that is conserved in human, cynomolgus monkey, and mouse. CTX-471 is a fully human antibody that binds with moderate monovalent affinity to recombinant human or cynomolgus macaque CD137 (K D = 50 nM for human, 61 nM for cyno) and cross-reacts with lower affinity to mouse CD137 (K D = 748 nM; Supplemental Figure 1A; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.133647DS1). CTX-471 similarly binds to primary human or cyno T cells with approximately equal affinity (EC 50 = 1.4 nM for human, 0.66 nM for cyno) and with lower affinity to murine T cells (EC 50 = 36 nM; Supplemental Figure 2, A-C).
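To put the measured monovalent affinities in context, the Python sketch below evaluates a simple 1:1 Langmuir binding model at a fixed antibody concentration. This ignores avidity from bivalent IgG binding and cell-surface receptor clustering, so it is only an illustration of what the reported K D values imply, not a model used in the study.

def fraction_bound(conc_nM, kd_nM):
    """Equilibrium receptor occupancy for 1:1 binding (Langmuir isotherm)."""
    return conc_nM / (conc_nM + kd_nM)

for name, kd in [("CTX-471 / human CD137", 50.0),
                 ("CTX-471 / mouse CD137", 748.0),
                 ("CTX-471-AF / mouse CD137", 86.0)]:
    occ = fraction_bound(100.0, kd)          # at 100 nM antibody (~15 ug/mL IgG)
    print(f"{name}: {occ:.0%} occupancy at 100 nM")

Under this simplification, 100 nM antibody would occupy about two-thirds of human CD137 but only around 12% of mouse CD137, which is the gap the affinity-matched mouse surrogate described next is intended to close.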
To obtain the most pharmacologically relevant data from preclinical mouse models, CTX-471 was affinity matured to generate CTX-471-AF, which has increased affinity both for recombinant mouse CD137 (K D = 86 nM; Supplemental Figure 1A) and for CD137 on murine T cells (EC 50 = 2.8 nM; Supplemental Figure 2C). Both values are similar to the binding of parental CTX-471 to human CD137, supporting the use of CTX-471-AF as an affinity-matched mouse surrogate. CTX-471 binds CD137 at a non-ligand-competitive epitope, as demonstrated by the ability of recombinant CD137L to bind to a preformed complex of CTX-471 and CD137 (Figure 1A). As reported previously, CD137L also binds to a preformed complex of CD137 and urelumab, while utomilumab blocks ligand binding (16). In domain mapping experiments using truncated versions of CD137, CTX-471 binds to the membrane-proximal cysteine-rich domains 3-4 (CRD3-4), similar to utomilumab and in contrast to urelumab, which binds to CRD1-2 (Figure 1B). CTX-471 binding is significantly reduced by mutations at amino acid K114, with additional contributions from E111, T113, N126, I132, and P135 (Supplemental Figure 1B), consistent with an epitope in CRD3-4 on a face of CD137 directed away from the ligand binding site (Figure 1C). In contrast, urelumab binds at a membrane-distal epitope centered on amino acid R41 (16). Point mutation K113A in mouse CD137 (analogous to K114 in human) eliminates binding of CTX-471-AF, confirming that the epitope is conserved between species (Figure 1, D and E). Mutation of amino acid Y40 in mouse CD137 (analogous to R41 in human CD137) disrupts binding of the murine CD137-specific surrogate antibody 3H3 (Figure 1D), suggesting that it binds to CRD1 at a highly similar epitope to where urelumab engages the human receptor (Figure 1E).

CTX-471 stimulates primary T cells from human, monkey, and mouse with intermediate potency. The activity of TNFR agonist antibodies is influenced by FcγR interactions that promote receptor clustering (20-24). We selected human IgG4 as the backbone for CTX-471 based on the ability of this isotype to engage the Fc receptors FcγRI (CD64) and FcγRIIb (CD32b) to drive receptor cross-linking, while avoiding binding to FcγRIIIa (CD16a) and ADCC-mediated depletion of immune effector cells expressing CD137 (25,26). In coculture experiments with anti-CD3-activated primary T cells and CHO cells engineered to express CD32b, stimulation with CTX-471 or CTX-471-AF induced IFN-γ production from human T cells in a dose-dependent manner, with a low nanomolar EC 50 (Figure 2A). Importantly, CTX-471 showed an intermediate level of activation that fell between the superagonist activity of urelumab and the very weak activity of utomilumab (Figure 2A). CTX-471 consistently increased activation of T cells from multiple human donors (Figure 2B). When WT CHO cells lacking expression of CD32b were used in the coculture assay, there was no appreciable IFN-γ induction, demonstrating the FcγR-dependent nature of the agonistic activity of CTX-471 (Figure 2C). CTX-471 and CTX-471-AF also induced IFN-γ production from mouse (Supplemental Figure 2D) and cynomolgus monkey (Supplemental Figure 2E) T cells in analogous coculture assays, confirming cross-functionality across species. The 3H3 antibody displayed superagonist activity in the mouse assay, further supporting its use as a functional surrogate for urelumab (Supplemental Figure 2D).
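EC 50 values like those above are typically extracted by fitting a four-parameter logistic curve to the dose-response data. The sketch below shows such a fit with SciPy on synthetic data; the data points, parameter guesses, and resulting values are placeholders illustrating the method only, not measurements from this study.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

dose_nM = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
ifng_pg_ml = np.array([60.0, 75.0, 140.0, 380.0, 900.0, 1500.0, 1800.0, 1900.0, 1950.0])  # synthetic

popt, _ = curve_fit(four_pl, dose_nM, ifng_pg_ml, p0=[50.0, 2000.0, 1.0, 1.0], maxfev=10000)
bottom, top, ec50, hill = popt
print(f"EC50 ~ {ec50:.2f} nM, Hill slope ~ {hill:.2f}")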
CD137L potentiates CTX-471-induced NF-κB signaling. Signaling through receptors of the TNFR superfamily, such as CD137, requires receptor trimerization and clustering for efficient signaling (27). To measure the ability of CD137 antibodies to induce clustering, we built a split-luciferase complementation system in which clustering of CD137 induced by cross-linked antibodies causes the N- and C-termini of firefly luciferase to colocalize, interact, and refold into a functional enzyme. A molecular analogue of urelumab induced extensive, dose-dependent clustering, with single-digit nanomolar doses causing a 12-fold increase in signal over the isotype control (Figure 3A). In comparison, CTX-471 induced moderate clustering of CD137, with a signal increase ranging from 3- to 11-fold (Figure 3A). As CD137 signaling events culminate in NF-κB activation (28), we measured the ability of CTX-471 to elicit signaling in NF-κB-luciferase reporter cells and determined whether signaling was affected by CD137's natural ligand, CD137L. In the absence of the ligand, CTX-471, urelumab, and utomilumab induced NF-κB signaling in a dose-dependent manner when cross-linked (Figure 3, B-D). In the presence of suboptimal concentrations of CD137L, the amount of NF-κB signaling induced by CTX-471 was enhanced, while the activity of urelumab was largely unaffected and signaling from utomilumab was substantially decreased (Figure 3, B-D). The attenuated activity of utomilumab in the presence of CD137L is consistent with the observation that this antibody competes with ligand binding (Figure 1A). CD137L binding similarly increased IFN-γ production from T cells treated with CTX-471 (Figure 3E), while it had no effect on cells treated with urelumab or utomilumab (data not shown).

[Figure 1. CTX-471 binds to a unique epitope on CD137. (A) Binding traces from Bio-Layer Interferometry (BLI) experiments testing the ability of recombinant CD137L to bind to preformed complexes of CD137 and the stated antibody. (B) Maximum BLI response measured for binding of full-length or truncated forms of human CD137 to the tested antibodies. (C) Mapping of identified contact residues for binding of CTX-471 (red) or urelumab (blue) onto the crystal structure of the human CD137/CD137L complex (PDB 6CPR), based on mutational analyses. (D and E) BLI measurements for binding of mouse CD137 truncations and point mutations to the tested antibodies, with mapping of identified contact residues onto the crystal structure of the mouse CD137/CD137L complex (PDB 6MKZ).]

CTX-471-AF monotherapy achieves high rates of complete tumor regression in multiple syngeneic mouse models. The mouse cross-reactivity and cross-functionality of CTX-471-AF and CTX-471 allowed the testing of these antibodies in syngeneic mouse tumor models in which the host animals have fully intact immune systems. In the CT-26 murine colon carcinoma model, both CTX-471-AF and CTX-471 demonstrated potent antitumor activity across a broad dose range in mice bearing established (50-75 mm 3 ) tumors (Figure 4A and Supplemental Figure 3A), resulting in a high rate of complete cures and significantly improved overall survival (Figure 4B and Supplemental Figure 3B). At its most efficacious dose level of 150 μg, CTX-471-AF achieved complete tumor regressions in 100% of treated mice (8 of 8 mice) (Figure 4A). Curative efficacy was also observed at the lowest tested dose level of 12.5 μg/mouse with both antibodies. Mice that experienced complete responses were rechallenged with a second inoculation of CT-26 tumor cells on the opposing flank 71-88 days after the last treatment. Complete protection from tumor regrowth was observed, indicating the establishment of long-term antitumor immunity (Figure 4B and Supplemental Figure 3B).
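The survival comparisons reported in these models rest on Kaplan-Meier estimates and log-rank tests. The sketch below shows such an analysis with the lifelines package on synthetic endpoints (long-term survivors, e.g. cured and rechallenged animals, treated as censored, which is an assumption for illustration); the durations are placeholders, not the study's data.

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Days to humane endpoint; event = 1 means the endpoint was reached, 0 means censored.
ctrl_days,  ctrl_events  = [18, 21, 22, 24, 25, 27, 28, 30], [1, 1, 1, 1, 1, 1, 1, 1]
treat_days, treat_events = [35, 60, 90, 90, 90, 90, 90, 90], [1, 1, 0, 0, 0, 0, 0, 0]

km_ctrl  = KaplanMeierFitter().fit(ctrl_days,  event_observed=ctrl_events,  label="isotype control")
km_treat = KaplanMeierFitter().fit(treat_days, event_observed=treat_events, label="CTX-471-AF")
print(km_ctrl.median_survival_time_, km_treat.median_survival_time_)

result = logrank_test(ctrl_days, treat_days,
                      event_observed_A=ctrl_events, event_observed_B=treat_events)
print(f"log-rank p = {result.p_value:.4f}")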
CTX-471-AF treatment also achieved complete cures across a wide dose range in the A20 B cell lymphoma and the orthotopic EMT6 triple-negative breast cancer models, leading to significantly improved overall survival and long-term protection from tumor rechallenge (Figure 4, C and D, and Supplemental Figure 3C).

In order to understand the effect of epitope and in vitro potency on in vivo activity, we compared CTX-471-AF with the urelumab-like mouse surrogate antibody 3H3. At the higher dose levels, both CTX-471-AF and 3H3 achieved complete tumor regressions and significant extension of overall survival (Supplemental Figure 4A). At the lowest tested dose level of 12.5 μg/mouse, CTX-471-AF achieved complete tumor regressions in 7 of 8 mice (88%), leading to a significant benefit in overall survival (P = 0.0003) compared with control-treated mice (Supplemental Figure 4A). In contrast, 3H3 led to tumor regression in 3 of 8 mice (43%) and did not significantly improve overall survival (Supplemental Figure 4A), suggesting that CTX-471-AF was more efficacious in vivo, despite significantly lower in vitro potency. The antitumor activity elicited by 3H3 was accompanied by significant splenomegaly, along with increased plasma levels of the proinflammatory cytokines TNF-α, IFN-γ, and IL-27 (Supplemental Figure 4, B-E). CTX-471-AF did not induce significant changes in these parameters at any dose level, suggesting that CTX-471-AF causes a weaker systemic activation of the immune system than 3H3 (Supplemental Figure 4, B-E).

CTX-471-AF selectively and profoundly reprograms the tumor microenvironment. Across a wide dose range, treatment with CTX-471-AF induced a profound influx of CD45 + leukocytes within the majority of CT-26 tumors when compared with tumors from isotype control-treated animals (Figure 5A). While the overall proportions of intratumoral CD8 + and CD4 + T cells did not change significantly in response to CTX-471-AF (Figure 5, B and C), the observed expansion of the CD45 + compartment indicates an increase in the total number of tumor-infiltrating T cells. There was also an expansion of antigen-positive T cells in the tumor, as measured by AH-1 pentamer staining (Supplemental Figure 5A). Conversely, the level of immunosuppressive Tregs within tumors decreased significantly at all tested doses of CTX-471-AF, as compared with isotype control treatment (Figure 5D). Among tumor-infiltrating leukocytes, CD137 is most highly expressed on Tregs (23), which may render these cells uniquely susceptible to depletion by CTX-471-AF through antibody-dependent cell-mediated phagocytosis (ADCP). Treatment with CTX-471-AF was associated with significantly decreased coexpression of the immunoinhibitory exhaustion markers TIGIT and PD-1 on the surface of intratumoral CD8 + and CD4 + T cells, indicating that CTX-471-AF therapy either protected tumor-infiltrating T cells from exhaustion or reversed the exhausted phenotype (Figure 5, E and F). Tumor-infiltrating lymphocytes (TILs) in CTX-471-treated mice also demonstrated increased metabolic uptake and mitochondrial activity, as reported previously (29, 30) (Supplemental Figure 5B). The proportion of CD11b + F4/80 + macrophages was reduced in the tumors of mice treated with CTX-471-AF (Figure 5G). The macrophages remaining within the tumor microenvironment following treatment displayed a repolarization to an antitumor M1 phenotype, as evidenced by higher expression of the M1 markers CD38 and iNOS (Figure 5H). NK cells in the tumor shifted to a more mature or cytotoxic phenotype with increased expression of CD11b (Supplemental Figure 5C).
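Group comparisons of flow-cytometry frequencies like these are commonly analyzed by one-way ANOVA followed by Bonferroni-corrected pairwise tests against the control group, the approach named in the study's figure legends. The sketch below illustrates that workflow with SciPy; the percentages are synthetic placeholders, not measured frequencies from this study.

from scipy import stats

groups = {
    "isotype":          [28, 31, 26, 30, 29],    # e.g., % Tregs of CD4+ TILs per mouse (synthetic)
    "CTX-471-AF 25ug":  [12, 15, 10, 14, 13],
    "CTX-471-AF 150ug": [11,  9, 13, 12, 10],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_anova:.2e}")

control = groups["isotype"]
comparisons = [(name, vals) for name, vals in groups.items() if name != "isotype"]
for name, vals in comparisons:
    t, p = stats.ttest_ind(control, vals)
    p_bonf = min(1.0, p * len(comparisons))      # Bonferroni correction across comparisons
    print(f"{name} vs isotype: adjusted p = {p_bonf:.2e}")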
Distinct from the robust immunophenotypic changes observed within tumors following CTX-471-AF treatment, the effects on peripheral immunophenotypes were much milder. CTX-471-AF did not cause any major change in the numbers or proportions of effector T cells, Tregs, or macrophages in the spleen (Supplemental Figure 6, A-D and H). Counter to what was observed in tumors, the frequency of TIGIT and PD-1 coexpression on CD4 + and CD8 + T cells was increased within spleens, along with an increased frequency of CD8 + T cells possessing a CD44 + CD62L − effector memory phenotype (Supplemental Figure 6, E-G). In contrast to CTX-471-AF, treatment with 3H3 resulted in substantial changes in the peripheral immune system, including significant expansion of liver CD8 + T cells at all tested dose levels (Supplemental Figure 7, A-D). 3H3 also dramatically increased the frequency of PD1 + TIGIT + T cells in the spleens and livers of treated mice, compared with milder changes in the CTX-471-AF-treated animals (Supplemental Figure 7, E-H). Together, these data suggest that the stronger agonistic signaling from 3H3 may be associated with broader T cell activation and exhaustion compared with the more moderate signaling from CTX-471-AF.

Consistent with better elicitation of effector T cell functions after 3H3 and CTX-471-AF treatment, single-cell gene expression analysis of isolated CD8 + TILs showed elevated enrichment of pathways such as T cell activation in clusters 0 and 2 compared with cluster 1 (Supplemental Figure 8B). We further quantified the expression of a memory-precursor signature (31), an effector T cell signature (31), and an exhaustion/dysfunction signature (32) in the CD8 + TILs across the 3 groups (Figure 5K). We saw that the CD8 + TILs from both CTX-471-AF- and 3H3-treated mice showed more memory-precursor/stem-like cells relative to the CD8 + TILs from isotype-treated controls (Figure 5K). Furthermore, although the CD8 + TILs from both CTX-471-AF- and 3H3-treated mice showed acquisition of effector and exhaustion gene programs, significant differences in the proportion of cells expressing these programs indicated that CTX-471-AF treatment does not push cells as hard toward exhaustion/dysfunction as 3H3 treatment; instead, it sustains an intermediate level of effector function.
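Signature scoring of the kind described above can be reproduced with standard single-cell tooling. The sketch below uses scanpy's score_genes on a small synthetic dataset; the gene lists are generic memory/effector/exhaustion markers standing in for the published signatures of refs. 31 and 32, and the expression data are randomly generated, so only the workflow is illustrative.

import numpy as np
import anndata as ad
import scanpy as sc

rng = np.random.default_rng(0)
marker_genes = ["Tcf7", "Sell", "Il7r", "Bach2", "Ccr7",
                "Gzmb", "Prf1", "Ifng", "Klrg1", "Cx3cr1",
                "Pdcd1", "Havcr2", "Lag3", "Tox", "Entpd1"]
genes = marker_genes + [f"Gene{i}" for i in range(85)]          # pad to 100 genes

adata = ad.AnnData(X=rng.poisson(1.0, size=(300, len(genes))).astype(np.float32))
adata.var_names = genes
adata.obs["treatment"] = rng.choice(["isotype", "CTX-471-AF", "3H3"], size=300)

sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

signatures = {
    "memory_precursor": ["Tcf7", "Sell", "Il7r", "Bach2", "Ccr7"],
    "effector":         ["Gzmb", "Prf1", "Ifng", "Klrg1", "Cx3cr1"],
    "exhaustion":       ["Pdcd1", "Havcr2", "Lag3", "Tox", "Entpd1"],
}
for name, gene_list in signatures.items():
    sc.tl.score_genes(adata, gene_list, score_name=name, ctrl_size=50)

# Mean per-cell signature score by treatment group
print(adata.obs.groupby("treatment")[list(signatures)].mean())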
The antitumor activity of CTX-471 is driven by FcγR engagement and requires the coordinated involvement of both T cells and NK cells. To identify the relative contribution of specific immune effector cell types to the antitumor activity of CTX-471, we treated CT-26 tumor-bearing mice with CTX-471 in the context of immune subset depletion. Antibody-based depletion of either CD8 + T cells, CD4 + T cells, or NK cells resulted in a nearly complete loss of antitumor activity (Figure 6, A and B). These observations indicate that the tumor rejections induced by CTX-471 therapy require the coordinated activity of both innate and adaptive immune effector cells. Human IgG4 antibodies bind to mouse Fc receptors FcγRI and FcγRIIb through the Fc domain, albeit with lower affinity compared with the corresponding human receptors (25, 33), and are able to induce ADCP by mouse macrophages (34). To explore the contribution of FcγR binding to antitumor activity, a set of CTX-471 variants was engineered with different isotypes varying in Fc receptor engagement and effector function. Consistent with previous findings, treatment with CTX-471 and CTX-471-AF resulted in potent antitumor activity, with complete tumor regressions in all mice and 100% overall survival (Figure 6C).

[Figure 6 legend, excerpt: frequency of tumor-infiltrating Tregs on day 9 following administration of CTX-471 with hIgG4 or rIgG2a isotype on days 0, 3, and 6. Statistical significance was determined using the log-rank test (C) or 1-way ANOVA (D) followed by Bonferroni's multiple comparison test compared with control treatment groups (*P < 0.05, **P < 0.01, ****P < 0.0001). All data presented as mean ± SEM.]

A variant of CTX-471 with point mutation N303A (equivalent to N297A in ref. 35) that prevents Fc glycosylation and Fc/Fcγ receptor binding showed severely reduced efficacy, leading to cures in only 1 of 8 mice (13%) and highlighting the importance of FcγR-mediated clustering. Clone 3H3 is a rat IgG2a antibody; therefore, a series of experiments was performed comparing isotype-matched versions of CTX-471 and 3H3 to better define the relative effects of binding domain and Fc isotype on their differentiated activity. Rat IgG2a Fc domains bind preferentially to inhibitory Fc receptors in the mouse, mediating Fc clustering but with diminished ability to induce ADCC/ADCP in mice compared with human IgG4 (36). 3H3-hIgG4 behaved similarly to 3H3-rIgG2a in the CT-26 model, leading to significantly higher peripheral activation compared with CTX-471, as measured by systemic proinflammatory cytokines and T cell infiltration and activation/exhaustion in the liver and spleen (Supplemental Figure 9). CTX-471 also demonstrated better antitumor activity compared with the isotype-matched 3H3-hIgG4, demonstrating that the wider therapeutic window of CTX-471 compared with 3H3 is primarily a product of its differentiated binding epitope and affinity (Supplemental Figure 9). In contrast, the human IgG4 isotype was critical to the ability of CTX-471 to reduce Tregs in the tumor, as a rat IgG2a variant of the molecule had no effect (Figure 6D). Similarly, CTX-471-rIgG2a did not reduce the frequency of PD1 + /TIGIT + T cells in the tumor, as observed for CTX-471-hIgG4 (Supplemental Figure 9). CTX-471-rIgG2a was still highly effective in the CT-26 tumor model, with 6 of 8 cures, but it did not match the 8 of 8 cures of the hIgG4 version (Figure 6C). Collectively, these data suggest that the antitumor activity of CTX-471 critically depends on Fc receptor-mediated clustering, with Fc-mediated depletion of Tregs playing a small but complementary role, and that CTX-471, as a human IgG4, represents an optimal combination of binding domain and Fc isotype.

High doses of CTX-471 and CTX-471-AF do not induce hepatic inflammation. The first CD137 antibody to enter the clinic, urelumab, demonstrated dose-dependent moderate-to-severe hepatic toxicity (14). Urelumab is not mouse cross-reactive; however, the mouse-specific anti-CD137 agonist antibody 3H3, which binds to a similar epitope as urelumab, has been shown to cause hepatic inflammation in mice (18-20). To assess the potential of CTX-471, CTX-471-AF, and 3H3 to induce liver inflammation, we administered up to 80 mg/kg of these antibodies to nontumor-bearing mice by weekly i.v. injections.
At the end of the 4-week dosing period, 3H3 treatment had induced a mild but significant increase in spleen and liver weights, whereas CTX-471 and CTX-471-AF had no effects (Supplemental Figure 10, A and B). Serum levels of transaminases (e.g., ALT and AST) were increased with 3H3 but not with CTX-471 or CTX-471-AF (Figure 7A and Supplemental Figure 10C). 3H3-induced hepatic inflammation has been shown to involve CD137-mediated activation of liver-specialized macrophages known as Kupffer cells, leading to the production of proinflammatory cytokines, including IL-27 and TNF-α, that, in concert with autoantigen presentation by the Kupffer cells, trigger a CD8 + T cell-driven autoimmune hepatitis (18). Consistently, we observed elevated plasma levels of the proinflammatory cytokine TNF-α (Figure 7B) and expansion of CD8 + T cells in livers and spleens (Figure 7, C-F, and Supplemental Figure 10D). Treatment of murine BM-derived macrophages with 3H3 in vitro significantly increased expression of both IL-27 and TNF-α, while CTX-471 treatment had no measurable effect and CTX-471-AF induced a mild but nonsignificant increase (Figure 7J). 3H3 treatment also increased production of IL-6 and IL-1β by CpG-stimulated murine macrophages (Supplemental Figure 10, E and F). To confirm this observation with primary human cells, peripheral blood monocytes from healthy human donors were matured to macrophages in vitro and stimulated in a similar fashion with CpG and anti-CD137 antibodies. Human macrophages treated with CTX-471 did not show any changes in TNF-α production, whereas macrophages treated with urelumab had increased TNF-α and IL-27 production (Figure 7K and Supplemental Figure 10G).

CTX-471-AF has a unique ability to cure mice of large tumors. Considering the dramatic efficacy observed with CTX-471 and CTX-471-AF in the CT-26 tumor model, we decided to increase the stringency and therapeutic relevance of the model by allowing tumors to grow to ~500 mm 3 before starting treatment. We first compared the efficacy of CTX-471 with CTX-471-AF and CTX-471-AF2, a higher-affinity variant that binds mouse CD137 with a monovalent K D of 10 nM. All 3 versions of the antibody showed an ability to completely regress large CT-26 tumors and significantly extend overall survival (Figure 8A). CTX-471-AF performed the best, achieving a 100% complete regression rate, suggesting that an intermediate affinity for CD137 is optimal for antitumor activity. Consistent with prior studies, all cured mice showed durable protection from tumor rechallenge (Supplemental Figure 11). Comparison studies were performed against anti-CD137 clone 3H3 and a panel of well-characterized immuno-oncology antibodies targeting PD-1, PD-L1, CTLA-4, or OX40 that have demonstrated activity against smaller CT-26 tumors in the literature (37-40). In the context of large CT-26 tumors (~450 mm 3 ), however, these antibodies all failed to induce complete tumor regressions, substantially reduce tumor growth, or enhance overall survival (Figure 8, B and C).
Unlike control tumors, where CD8+ T cells were low in number and confined to the periphery, CTX-471-AF treatment induced CD8+ T cell expansion and infiltration (Figure 8E). [Figure 8 legend fragment: statistical comparisons vs. control treatment groups, *P<0.05, **P<0.01. (D and E) Histological analysis of CT-26 tumors on days 7, 10, and 14 following i.p. administration of 25 μg/mouse CTX-471-AF or isotype control on day 0; formalin-fixed, paraffin-embedded tissues were stained with H&E (D) or with antibodies against CD8 (E); scale bars: 200 μm.]
Discussion
Immunomodulation via antibody-mediated agonism of TNFR superfamily members is a promising strategy for treating cancer either alone or in combination with checkpoint blockade (1). As a potent and inducible costimulator of both cytotoxic T cells and NK cells, CD137 was among the first TNFR to be identified as a target for cancer immunotherapy (2). The disparate safety and pharmacology profiles of the 2 anti-CD137 antibodies that have entered the clinic, urelumab and utomilumab, offer a striking example of how differences in epitope, FcγR interactions, ligand-blocking activity, and antibody affinity can influence the activity of TNFRSF agonists (14,15). By optimizing these attributes, we have generated a potentially novel CD137 agonist with an improved efficacy/safety profile. CTX-471 is a fully human, hinge-stabilized IgG4 antibody that binds to a unique epitope within CD137 and depends on Fc/FcγR interactions to deliver a well-tuned agonistic signal. To generate data in mice that would most closely recapitulate pharmacokinetics and pharmacodynamics in humans and cynomolgus monkeys, we additionally generated an affinity-matured version (CTX-471-AF) that binds mouse CD137 with an affinity comparable to that of CTX-471 for the human receptor. As monotherapies, both CTX-471 and CTX-471-AF exhibited extremely potent antitumor activity in multiple syngeneic mouse models, achieving high rates of complete cures at very low dose levels. In total, 100% of mice that rejected their original tumors were fully protected from tumor rechallenge, indicating that CTX-471 promoted long-term immune memory, consistent with the known role of CD137 in expanding memory T cells in vivo. CTX-471-AF also demonstrated a unique ability to cure large, established CT-26 tumors (450-500 mm³ before treatment), while validated antibodies against other immuno-oncology targets including PD-1, PD-L1, CTLA-4, or OX-40 failed to slow tumor growth. Importantly, the urelumab-like, mouse-specific CD137 agonist antibody 3H3 was also ineffective in the large tumor model, highlighting the improved in vivo efficacy of CTX-471-AF despite lower in vitro potency. Immune cell depletion experiments showed that the efficacy of CTX-471 required the presence of CD4+ T cells, CD8+ T cells, and NK cells, indicating a coordinated involvement of both innate and adaptive immune cells. As reported for other TNFR agonist antibodies (20)(21)(22)(23)(24), activity of CTX-471 both in vitro and in vivo requires the presence of FcγR-expressing accessory cells to promote receptor clustering. The selection of a human IgG4 backbone enables coengagement of CD64 (FcγRI) and CD32b (FcγRIIb) to drive CD137 cross-linking, while limiting the risk of deleterious depletion of CD137+ immune effector cells through binding CD16a (FcγRIIIa). 
In aggregate, our mechanistic observations indicate that the curative activity of CTX-471 requires not only the presence of CD137-expressing innate and adaptive immune effectors, but also the presence of FcγR-expressing cells. This codependency provides a good explanation for the tumor-selective activity of CTX-471, as the frequency of both CD137 + TILs and macrophages that constitutively express FcγRs have been shown to be higher within tumors as compared with normal tissues (41)(42)(43)(44). At baseline, a high proportion of T cells within CT-26 tumors showed coexpression of the immunoinhibitory receptors PD-1 and TIGIT, a characteristic associated with impaired effector function (45)(46)(47). After treatment with CTX-471 or CTX-471-AF, the frequency of intratumoral PD-1 + TIGIT + T cells decreased significantly, with a more modest effect from 3H3. In the spleen and liver, CTX-471-AF modestly increased the frequency of PD-1 + TIGIT + T cells, while 3H3 treatment strongly increased peripheral T cell activation and exhaustion. Similarly, our single-cell gene expression analysis of isolated CD8 + TILs showed that 3H3 treatment drove a significantly higher proportion of these cells toward exhaustion/dysfunction than did treatment with CTX-471-AF. Together, these observations indicate an altered effector T cell differentiation trajectory elicited by CTX-471, as strong agonists like 3H3 potentially overstimulate effector CD8 + T cells and promote terminal effector differentiation and exhaustion/dysfunction, whereas the welltuned agonism delivered by CTX-471 appears to sustain a less exhausted/dysfunctional effector phenotype. By virtue of its unique, nonligand competitive epitope, CTX-471 may synergize with natural CD137/ CD137L signaling. By engaging CD137 with antibody and ligand at the same time, it may be possible to generate larger clusters of the receptor, thereby driving enhanced signaling as reported for other TNFR antibodies (27). Additionally, CD137L expressed on myeloid cells can back signal following CD137 engagement, leading to DC maturation and migration, expression of proinflammatory cytokines and costimulatory receptors, and promotion of a Th1 T cell response in human cellular systems (48), although the effects are less clear in mice (49). Lastly, CD137L back signaling in T cells can limit T cell activation, an effect that is reversed by CD137 on T cells downregulating the ligand in cis (50). Given these different effects, it is possible that CD137L back signaling may act to tune the primary immune response by promoting myeloid and T cell activation in a proinflammatory environment but restricting T cell activation in a low-antigen environment (51). A nonligand competitive antibody such as CTX-471 should retain these natural interactions and regulatory mechanisms, which may be disrupted with a ligand-competitive antibody like utomilumab. As previously discussed, urelumab showed significant hepatic toxicity in early clinical studies, and similar hepatotoxicity has been recapitulated in mice with 3H3 (18)(19)(20). Other strong agonist antibodies against mouse CD137, including the ligand-blocking clone 2A and nonblocking clone 1D8, also induce liver inflammation, while weaker and FcγR-dependent clone LOB12.3 has no observed hepatotoxicity in mice (20,52). Mechanistic studies using 3H3 have implicated CD137-mediated activation of liver-resident macrophages, known as Kupffer cells, as the inciting event that leads to a CD8 + T cell-driven autoimmune hepatitis (18). 
In our studies, repeated high doses of CTX-471 or CTX-471-AF did not cause significant changes in hepatic inflammation parameters, whereas 3H3 triggered liver inflammation with induction of plasma cytokines (TNF-α, IFN-γ, IL-27) and the hepatic stress markers ALT and AST. This differential effect may be explained by macrophages having a higher threshold for CD137-mediated activation than T cells. Additionally, the liver microenvironment may lack features required for CTX-471 to exert its full agonistic effect, such as sufficient density of FcγR-expressing cells. Limitations of our studies include potential differences in the structure and function of CD137 in mice and humans, as well as differences in the expression and function of murine and human FcγRs. While human IgG4 binds both human and mouse FcγRI and FcγRIIb, there are differences in absolute binding affinity and the extent of effector function across species (25,34). In conclusion, CTX-471 and CTX-471-AF exhibited extremely potent antitumor activity as monotherapy, resulting in durable cures with generation of protective immune memory. The unique ability of CTX-471-AF to completely regress large tumors may represent an unprecedented level of preclinical efficacy for a single-agent immunotherapy. The ability of CTX-471/CTX-471-AF to selectively and profoundly reprogram the tumor microenvironment while sparing unwanted systemic effects indicates that it is an optimally tuned CD137 agonist owing to its unique epitope, requirement for FcγR coengagement, and optimized affinity. These attributes of CTX-471 may form a paradigm for the design of optimized agonists targeting other TNFRSF members. Based on our encouraging preclinical findings, CTX-471 has entered clinical development and is currently in a phase I clinical study in patients with solid tumors. Protein expression and purification Antibodies used in the study were generated by cloning DNA encoding both the light and heavy chain sequences independently into the multiple cloning site of the mammalian expression vector pcDNA3.4 (Thermo Fisher Scientific). Sequences for control antibodies, including urelumab and utomilumab, were obtained from the TABS Therapeutic Antibody Database (https://tabs.craic.com) or from PDB files 6MHR and 6A3W. Human IgG4 isotype antibodies were generated by fusion of DNA encoding the desired light variable region to the human κ constant region (Uniprot P01834) and the desired heavy variable region to the human IgG4 constant region (Uniprot P01861) modified with the S228P mutation to prevent chain shuffling (26). The aglyco-IgG4 variant was generated by mutating the asparagine equivalent to asparagine 297 in human IgG1 (35) to alanine, using standard molecular biology methods. The rat IgG2a isotype antibody was generated by replacing the human constant domains with sequences for rat IgG2a heavy chain and rat κ light chain (Uniprot P20760 and P01836). Subsequently, plasmid DNA for the corresponding light and heavy chains were cotransfected into Expi293 cells (Thermo Fisher Scientific) and incubated for 5 days, and the culture supernatant was harvested. Antibodies were isolated from the resultant supernatants by affinity capture using MabSelect SuRe (GE Healthcare, catalog 11003494), refined by size exclusion chromatography as required and buffer exchanged into PBS, pH 6.5. 
Epitope mapping For epitope mapping, a scanning saturation library of CD137 mutants was synthesized with single-point mutations at all noncysteine residue positions to every possible amino acid substitution except cysteine (Twist Biosciences). The library of CD137 variants was displayed on the surface of HEK cells with a single variant per cell and simultaneously stained for binding of nonoverlapping antibodies CTX-471 and urelumab directly conjugated with Alexa Fluor 488 and Alexa Fluor 647 (Thermo Fisher Scientific), respectively. Populations of cells with reduced binding to one antibody but not the other were selected on a BD FACSAria Fusion. DNA from sorted populations were analyzed by NGS using an Illumina MiSeq 2 × 300 bp platform to identify mutations enriched in each population. Selected CD137 mutants were cloned into His-tagged expression vectors and produced solubly in HEK cells. His-tagged variants were captured directly from the supernatant onto Ni-NTA Octet tips and binding measured to 100 nM CTX-471, urelumab, or CD137-L on Octet HTX (ForteBio). Identified mutations were highlighted on published CD137/CD137L cocrystal structures for human (6CPR) and mouse (6MKZ) using Discovery Studio Visualizer (BioVia). T cell isolation and preparation Human peripheral blood mononuclear cells (PBMCs) were isolated from leukopaks (Stemcell Technologies, catalog 70500) using standard PBMC isolation techniques and frozen down as aliquots in Cryostor-CS10 (Stemcell Technologies, catalog 07930) until the day before an experiment. One day before an assay, PBMCs were thawed and rested overnight in T cell media (TCM). For functional assays, total T cells were isolated using an EasySep Human T Cell Isolation Kit (Stemcell Technologies, catalog 17951). For mouse T cell assays, BALB/c mouse (Charles River Laboratories, strain BALB/cAnNCrl) spleens were perfused with Sorting Buffer (PBS + 1% FBS + 2 mM EDTA) to isolate splenocytes. The isolated splenocytes were passed through a 70-μm nylon mesh filter and then directly used for CD8 + T cell isolation (Miltenyi Biotec, catalog 130-104-075) without RBC lysis. For Cynomolgus macaque T cell assays, cynomolgus PBMCs (IQ Biosciences, catalog IQB-MnPB102) were thawed, resuspended in TCM at 2 × 10 6 /mL, and then activated by adding anti-CD3 (Mabtech, catalog 3610-1-50) at 1 μg/ mL for 3 days. After 3 days, the expanding cells were cultured in TCM containing 5 ng/mL hIL-2 (BioLegend, catalog 589106) and 2.5 ng/mL IL-7 (BioLegend, catalog 581904) for an additional 8-12 days, with media changes every 2-3 days, before being switched back to cytokine-free TCM the day before setting up an assay. T cell activation assays For in-culture functional studies with primary human T cells, 100,000 total T cells were cocultured with 50,000 ExpiCHO-S cells (CHO, Thermo Fisher Scientific, catalog A29127) lentivirally transduced to express human CD32b (CHO-CD32b). Anti-CD137 or isotype control antibodies were added at indicated concentration, along with 0.25 μg/mL anti-human CD3 antibody (clone OKT-3, Thermo Fisher Scientific, catalog 16-0037-8). After a 3-day coculture, supernatants were tested for secreted IFN-γ using an electrochemiluminescence assay (Meso Scale Discovery, catalog K151AEB-4). In experiments testing the effect of CD137L on antibody activity, 0.5 μg/mL CD137-Fc was added to each well. For in-culture functional studies with primary mouse and cynomolgus monkey T cells, a similar procedure was employed with the following deviations. 
Mouse T cells were cocultured with K562-CD32 cells instead of CHO-CD32 due to an observed ability of CHO cells to provide endogenous costimulation to mouse T cells. The anti-mouse CD3 antibody (clone 145-2C11, BioLegend, catalog 100331) and the mouse IFN-γ detection kit (Meso Scale Diagnostics, catalog K152AEB-2) were used. Cynomolgus T cells were activated in the cocultures using anti-monkey CD3 antibody (clone CD3-1, mAbTech, catalog 3610-1-50) and IFN-γ measured with a nonhuman primate detection kit (Meso Scale Diagnostics, catalog K156QOD-1). Receptor occupancy (RO) Primary CD8 + T cells from humans and mice or total T cells from cynomolgus monkeys were activated with CD3 antibodies as described above. Total T cells from cynomolgus monkeys were further stained with CD8 (BioLegend, catalog 301032) and CD4 (eBioscience, catalog 12-4998-82) antibodies and gated on CD8 + prior to analysis. Activated T cells were incubated with serial dilutions of CTX-471 or CTX-471-AF (0.003-100 nM) overnight, and binding was detected with a secondary antibody against human IgG (eBioscience, catalog 12-4998-82). Staining with saturating concentrations of CTX-471 or CTX-471-AF was used to establish the mean fluorescence intensity (MFI) with 100% RO (MFI 100% RO ). Binding MFI of an isotype control antibody was considered background and subtracted from all values. Percent RO was calculated using the background subtracted values as % RO = 100 × (MFI Sample /MFI 100% RO ). In parallel, a noncompetitive antibody was used to determine total CD137 levels. Clustering and signaling assays HEK-SplitCD137 cells were generated by transducing HEK-293 cells (American Type Culture Collection [ATCC], catalog ATCC-CRL-1573) with lentiviral constructs expressing human CD137 fused to either amino acids 2-416 of firefly luciferase with a neomycin selection cassette or amino acids 398-550 of firefly luciferase with a puromycin selection cassette. Cells stably expressing both fusion proteins were selected by culturing in 1 μg/mL puromycin and 100 ng/mL neomycin for 2 weeks. For measuring CD137 clustering, CD137 and isotype control antibodies were incubated for 30 minutes with the anti-human F(Ab') 2 reagent (Jackson ImmunoResearch, catalog 109-006-098) to induce cross-linking. A total of 50,000 HEK-SplitCD137 cells were incubated with serial dilutions of the cross-linked antibodies for 4 hours. The cells were lysed with Bright-Glo substrate (Promega, catalog E2620), and the resulting luminescence signals were measured using the BioTek Synergy H1 microplate reader. For measuring CD137-driven NF-κB signaling, HEK-293T-NF-κB-CD137 cells were obtained from CrownBio (catalog C2012). A subclone designated 1E17 was selected for optimal signaling properties and used in these assays in the presence or absence of plate-bound 10 nM CD137L-Fc (Sino Biological, catalog 15963-HO1H) or an isotype control. A total of 50,000 HEK-293T-NF-κB-CD137 cells were incubated for 4 hours with serial dilutions of the CD137 or isotype control antibodies. The cells were then lysed by addition of 100 μL per well SteadyLite plus substrate buffer (Perkin Elmer, catalog 6066751), and the resulting luminescence signals were measured using the BioTek Synergy H1 microplate reader. Syngeneic tumor models Cell lines, tumor formation, and tumor monitoring. CT-26, A20, and EMT6 cell lines were obtained from ATCC. CT-26 colon carcinoma cells were cultured in DMEM containing 10% heat-inactivated FBS and maintained at 37°C and 5% CO 2 . 
For culturing EMT6 breast carcinoma cells, the medium was further supplemented with 1 mM sodium pyruvate solution, 1× NEAA solution, and 1× MEM vitamin solution. A20 B cell lymphoma cells were cultured in RPMI 1640 media containing 10% heat-inactivated FBS, 10 mM HEPES, 1 mM sodium pyruvate, and 0.05 mM 2-mercaptoethanol. All cell lines were tested and verified to be murine virus and mycoplasma free prior to in vivo implantation. BALB/c mice were obtained from Charles River Laboratories and were 6-9 weeks old at the beginning of the study. A total of 1 × 10⁵ CT-26 cells or 5 × 10⁶ A20 cells were injected s.c. into the right flank in 0.1 mL PBS. A total of 5 × 10⁴ EMT6 cells were injected into the right mammary fat pad in 0.05 mL PBS. Mice were randomized into groups of 6-10, and dosing was initiated 7-14 days after tumor cell inoculation when the tumors reached a predetermined volume (50-100 mm³ for standard tumor models and 450-500 mm³ for large tumor models). Tumor width and length were measured using dial calipers, and tumor volumes were calculated by the formula length × (width²)/2. Mice were euthanized when the tumor size reached the humane endpoint (2000 mm³ for CT-26 and A20 tumor-bearing mice; 1000 mm³ for EMT6 tumor-bearing mice). Mice with no palpable evidence of tumors for 60-90 days after termination of treatment were considered cured and were subsequently rechallenged with the same tumor cells on the opposite (left) flank. As a control, 5 naive mice were also inoculated in the same manner. Single cell suspensions were stained with fixable viability dye (eBioscience, catalog 65-0865-18) and Fc receptors blocked with anti-CD16/CD32 (eBioscience, catalog 13-0161-86). For cell surface staining, single cell suspensions were incubated with antibody cocktails for 30 minutes on ice. Intracellular staining was performed using the FoxP3/transcription factor staining buffer set following the manufacturer's fixation/permeabilization protocol (eBioscience, catalog 00-5523-00). The optimal concentration for each antibody was predetermined by titration.
Differential gene expression analysis of CD8+ TILs
CD8+ T cells were sorted from CT-26 tumor-bearing mice that were treated with CTX-471-AF, 3H3, or isotype control antibodies and were encapsulated into droplets. Libraries were prepared using Chromium Single Cell 3′ Reagent Kits v2 according to the manufacturer's protocol (10× Genomics). The generated scRNA-Seq libraries were sequenced using a 75-cycle NextSeq 500 high-output V2 kit. Read demultiplexing, mm10 reference alignment, filtering, Unique Molecular Identifier (UMI) collapsing, and expression matrix generation were performed using Cell Ranger 2.1.0 (10× Genomics). Cells with fewer than 400 genes detected or genes expressed in fewer than 3 cells were removed from analysis. UMI counts were normalized to account for differences in coverage such that each row (cell) in the expression matrix adds to 10,000. Data were then log-transformed, log2(TPM + 1), and scaled to a mean of 0 with unit variance for further analysis using Scanpy (version 1.4). Data were deposited in the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/) with accession number GSE144473. For data visualization, t-distributed stochastic neighbor embedding (t-SNE), as implemented in Scikit-learn (http://www.jmlr.org/papers/v12/pedregosa11a.html), was applied with a perplexity of 30 calculated from the first 20 principal components as evaluated on highly variable genes. 
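The single-cell preprocessing described above (removal of cells with fewer than 400 detected genes and of genes detected in fewer than 3 cells, per-cell normalisation to 10,000 counts, log transformation, scaling to zero mean and unit variance, and t-SNE with perplexity 30 on the first 20 principal components) maps closely onto standard Scanpy calls. The following is a minimal sketch under stated assumptions: the input path, the number of highly variable genes, and the use of Scanpy's natural-log log1p in place of the paper's log2(TPM + 1) (the two differ only by a constant factor) are illustrative choices, not the authors' exact code.

import scanpy as sc

# Load the Cell Ranger expression matrix (path is illustrative).
adata = sc.read_10x_mtx("cellranger_output/filtered_feature_bc_matrix/")

# Quality filtering as described in Methods.
sc.pp.filter_cells(adata, min_genes=400)   # drop cells with <400 detected genes
sc.pp.filter_genes(adata, min_cells=3)     # drop genes detected in <3 cells

# Normalise each cell to 10,000 counts, then log-transform.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)  # paper uses log2(TPM + 1); differs only by a constant factor

# Restrict to highly variable genes and scale to mean 0, unit variance.
sc.pp.highly_variable_genes(adata, n_top_genes=2000)  # 2000 is an assumption
adata = adata[:, adata.var.highly_variable].copy()
sc.pp.scale(adata)

# PCA, then t-SNE with perplexity 30 on the first 20 principal components.
sc.tl.pca(adata, n_comps=20)
sc.tl.tsne(adata, n_pcs=20, perplexity=30)

# Neighbour graph and Louvain clustering, matching the step described next
# (k = 40 nearest neighbours, resolution 0.6).
sc.pp.neighbors(adata, n_neighbors=40, n_pcs=20)
sc.tl.louvain(adata, resolution=0.6)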
To cluster cells, the Louvain graph clustering method was implemented by https://zenodo.org/record/35117 with resolution parameter 0.6 computed on a nearest neighbors graph with k = 40 nearest neighbors. Two clusters enriched for DC and B cell signatures were removed and reclustered on the nearest neighbors graph with k = 40 neighbors and perplexity of 0.6 again. For each signature (a set of genes), each cell was scored by its average expression across the signature list, subtracted from 50 randomly subsampled genes with similar expression as defined by 25 gene expression bins using the "score_genes" function as implemented in Scanpy. For signatures with both upregulated and downregulated genes, scores were calculated and separated, and the final score was down-signature subtracted from the up-signature. A 1-way ANOVA test of expression between samples was applied to evaluate statistical shifts in signature enrichment. In all signature comparisons, P values from the F-distribution were highly significant. Additionally, Tukey's pairwise range test was used to evaluate differences of enrichment between samples. To control for FDR, the Benjamini-Hochberg procedure was used as implemented by the Python stats models package with α = 0.05. To calculate the significance of gene expression between groups of cells, Benjamini-Hochberg-corrected P values from a 2-tailed t test with intentionally overestimated variance were used to reduce the likelihood of false positives as implemented in Scanpy's "rank_genes_groups" function. Assessment of hepatic inflammation Potential for CD137 agonist-induced hepatic inflammation was assessed in a dose-ranging safety pharmacology study in nontumor bearing C57BL/6 mice. Mice received a citrate buffer negative control, CTX-471, CTX-471-AF, or antibody 3H3 by weekly i.v. bolus injections on days 0, 7, 14, and 21. CTX-471 and CTX-471-AF were tested at 10, 20, 40, and 80 mg/kg dose levels, and antibody 3H3 was tested at 10 and 80 mg/kg. Livers and spleens were harvested and weighed on day 28. Intrahepatic and splenic CD8 + T cell levels were measured by flow cytometry as described above. Plasma levels of liver transaminase enzymes were measure using ALT activity assay (MilliporeSigma, catalog MAK052) and AST activity assay (MilliporeSigma, catalog MAK055) kits. Plasma levels of proinflammatory cytokines were measured with custom multiplexed electrochemiluminescence assay (Meso Scale Discovery) in accordance with the manufacturer's protocol for evaluation of mouse cytokine levels in plasma. For histological analysis, liver lobes were collected in histology cassettes and submerged into 10% neutral buffered formalin for 24 hours prior to transferring into 70% ethanol for long-term storage. Tissues were than embedded in paraffin, sectioned, and stained with H&E. Immunohistochemical staining with anti-mouse CD8 (Cell Signaling Technologies, catalog 98941) and anti-mouse F4/80 (Cell Signaling Technologies, catalog 70076) antibodies were performed on serial sections using standard methods. Numbers of CD8 + cells, F4/80 + cells, or F4/80 + clusters (defined as > 10 cells) were counted using the QuPath software (version 0.1.2). Macrophage differentiation and activation For differentiation of macrophages, unfractionated mouse BM (femurs and tibias) or human CD14 + peripheral blood monocytes were utilized. CD14 + monocytes were isolated using ferromagnetic bead separation (Miltenyi Biotec). 
Cells were matured in vitro to macrophages using either mouse or human M-CSF (Shenandoah Biotechnology) at 20 ng/mL for a period of 7 days. At the end of maturation culture, adherent macrophages were harvested, reseeded in 96-well plates, and rested overnight in minimal media without additional cytokines. Cells were then reactivated with 10 μg/mL CpG ODN (multispecies specific D-SL01, Invivogen) and the indicated antibodies for a period of 2 days. Following reactivation, cytokine production in the supernatant was analyzed using a multiplexed electrochemiluminescence-linked immunosorbent assay (Meso Scale Discovery).
Data and materials availability
Requests for materials should be addressed to TS and will be provided pending completion of a material transfer agreement with Compass Therapeutics.
Statistics
For in vivo studies, the statistical significance of differences between treatment groups was calculated using GraphPad Prism (version 7). Log-rank test followed by Bonferroni's multiple comparisons test was used to determine statistical significance between Kaplan-Meier survival curves. For all other data, 1-way ANOVA with Bonferroni's multiple comparisons test was performed. Significance was indicated as follows: *P<0.05; **P<0.01; ***P<0.001; ****P<0.0001. All data are represented as mean ± SEM.
Study approval
All animal studies were performed according to the guidelines of the IACUC at Compass Therapeutics.
Author contributions
UE designed in vivo pharmacology studies, oversaw the execution of all in vivo experiments, and interpreted the data. WG performed in vivo experiments and immunophenotyping studies. BW designed and executed in vitro functional assays and interpreted the data. CC performed in vivo studies. LM performed in vivo studies. HJW performed in vivo studies. MO designed and performed in vitro signaling studies. CL analyzed scRNA-Seq data. CW oversaw analysis of scRNA-Seq data. PB performed in vitro functional assays. DCG performed in vitro cell binding assays. SO performed biochemical binding studies. LL designed and oversaw biochemical binding studies. WKM expressed and purified truncated variants of CD137. SQH cloned and prepared DNA encoding truncated variants of CD137. CLL designed and executed epitope mapping studies. JL designed and executed affinity maturation experiments and performed structural analyses. WFC designed and performed in vitro macrophage assays. NZ led the early functional characterization efforts. ACA supervised the scRNA-Seq analysis methods and interpreted the data. MMS led the antibody discovery and engineering research efforts. PB provided overall scientific critique and guidance. TJS provided overall scientific critique and guidance. RT led the pharmacology research efforts, provided overall guidance, and interpreted the data. All authors contributed to the drafting and critical revision of the manuscript and approved the final version of the manuscript.
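The tests named in the Statistics section above (log-rank comparisons of Kaplan-Meier survival curves and 1-way ANOVA with Bonferroni's multiple comparisons test) were run in GraphPad Prism; for readers working in Python, a rough open-source equivalent is sketched below. The group values are toy numbers and the lifelines/statsmodels choices are assumptions for illustration, not the authors' workflow; Prism's post-tests are not reproduced exactly.

import numpy as np
from lifelines.statistics import logrank_test
from scipy.stats import f_oneway
from statsmodels.stats.multitest import multipletests

# Survival: log-rank test between control and treated groups (toy data).
time_ctrl = np.array([18, 21, 24, 27, 30, 33])   # days to humane endpoint
event_ctrl = np.array([1, 1, 1, 1, 1, 1])        # 1 = event observed
time_tx = np.array([60, 60, 60, 45, 60, 60])     # treated group
event_tx = np.array([0, 0, 0, 1, 0, 0])          # 0 = censored (e.g., cured)
lr = logrank_test(time_ctrl, time_tx,
                  event_observed_A=event_ctrl, event_observed_B=event_tx)
print("log-rank p =", lr.p_value)

# Other endpoints: 1-way ANOVA across groups, then Bonferroni-adjusted
# pairwise comparisons against control (toy ALT-like values).
ctrl = np.array([28.0, 31.0, 25.0, 30.0])
ctx471 = np.array([27.0, 29.0, 26.0, 32.0])
ab_3h3 = np.array([95.0, 120.0, 88.0, 140.0])
print("ANOVA p =", f_oneway(ctrl, ctx471, ab_3h3).pvalue)

raw_p = [f_oneway(ctrl, g).pvalue for g in (ctx471, ab_3h3)]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
print("Bonferroni-adjusted p vs. control:", adj_p)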
2020-03-12T10:34:56.897Z
2020-03-12T00:00:00.000
{ "year": 2020, "sha1": "6ad7b7aa70d43e0bf9d93eae38103982cc506689", "oa_license": "CCBY", "oa_url": "http://insight.jci.org/articles/view/133647/files/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "29c343005c65e922eb84d0d6ea1d487dae986f08", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232081579
pes2o/s2orc
v3-fos-license
Inclusive schools: Are teachers adequately prepared for inclusion?
This article will discuss one of the main topics on the educational and social agendas in Israel. Integrating children and adults with special needs into schools and the community is a worldwide issue. Many researchers have tried to find and evaluate the most effective integration methods, to assist people with special needs and enable them to enjoy a high quality of life and equality. In this article, we will look at the process of integrating students with special needs and the transition that took place during the last few decades regarding the idea of "inclusion", which is now a top priority for the Ministry of Education's directors. Based on recent studies, we will examine whether school teaching staff and student teachers are ready to implement inclusive programs in schools as required. We will then propose ways to optimize the training of the educational staff, towards the implementation of the inclusive programs.
Introduction
Integration and inclusion of children and adults with special needs in schools and the community are two issues on the educational and social agenda of many countries, including Israel. Many researchers are examining different ways of integration, to find the most effective way of integration for people with special needs, both in facilitating the integration process and in the success of the integration, manifested in improving the quality of life and equality for those integrated. In this article, we will examine the process of integrating students with disabilities in Israel since the enactment of the "Special Education Act" in 1988 and up to the formulation of the "inclusion" concept, which is favored by the Ministry of Education and is still being implemented. In this paper, we will review the following issues:
- When did the discourse on "inclusion" come to the forefront of educational priorities? Equally important, what was the founding event that initiated it?
- What is the added value of inclusion over integration?
- We will discuss the ability of teachers and student teachers to implement the school inclusion programs, according to the 11th amendment of the Special Education Act.
- Based on a literature review, we will present a model for preparing student teachers to teach according to the 11th amendment of the aforementioned law.
Integration in Israel - historical review
The philosophical concept underlying the idea of integration is that a child with special needs has the same rights as a child without such needs; therefore, he has the basic right to study together with his peers within the same educational system. Studies suggest that separating a child with special needs from his peers will inevitably result in future difficulties. Separating a child or an adult with special needs from the rest of society is a discriminatory action, designated to make the lives of "ordinary" people easier. 1 Excluding the "weak" people, allegedly to protect them, indicates discrimination and not consideration. Referral of children with special needs to special education shifts the responsibility of care from the regular school to the disabled individual and to the therapeutic staff. In doing so, it dismisses the regular educational system from responsibility and from the need to address the problems of children with special needs. 
Indeed, there are special children who need more teaching time and more time for learning; but in principle, disabled children do not need instruction that is fundamentally different from the instruction provided to their normally developing peers. According to this approach, special education teachers are educators with special skills and not educators of special children. 2 The philosophical concept that considers the disabled individual as an integral part of society, led to two complementing models that aim to achieve de facto integration of the special individual in society: the behavioral model, which advocates the principle of normalization, and the humanistic-educational model. The behavioral model -the principle of normalization One of the social changes that derived from the struggles of human rights movements, is the integration model -based on the concept of normalization. This concept developed in the United States and has manifested itself in two areas: 1) Legislation aimed at equal rights, equal opportunities, and affirmative action that would enable "normal life" in the community. 2) Integration of students with special needs into the public education system, in order to prepare them for a normal life. The behavioral concept has received much criticism for adhering to the medical model, according to which the abnormality must be "cured" and made "normal". The behavioral model dictated systematic ways of working and clear stages of diagnosis, defining the "disease", implementing an intervention program that determines the environmental conditions and subsequent treatment, and examining the results in light of the criterion of health or normality versus illness or abnormality. Although the behavioral method was found to be effective and was widely used in educational and rehabilitation institutions, it entailed omitting the uniqueness of the individual. The goal of normalization was achieved, perhaps, but the price was the disabled people's isolation and alienation, despite living within the community. No wonder adults with disabilities began to make claims and requests to live meaningful, interesting, and independent lives and to be able to make their own decisions. 4 The humanistic-educational model The humanistic-educational model, focusing on the disabled individual and his rights, was developed as an alternative to the behavioral model. This model maintains a holistic approach towards the individual and the social, therapeutic, educational, and rehabilitation services provided to him. According to this model, true integration is a two-way activity of the individual and society, rather than a one-way activity of preparing the disabled person to be like everyone else. True integration means cultivating the ability of the disabled person to live a meaningful life of dignity with the inclusion of the disability, and at the same time preparing society to accept people with disabilities and handicaps as ordinary people and adapt the services it provides to their needs. The keyword in this model is respect for the individual. From normalization to inclusion and integration According to the integration model, which is based on the behavioral model and the humanistic-educational model, people with disabilities should live in conditions as similar as possible to those of ordinary people in all areas of life: residence, work, study, leisure, etc. To this end, they should be granted the same civil rights given to other citizens. 
At the same time, in a gradual process, the understanding began to take root that the integration model was not sufficient and thus the idea of inclusion began to form. The idea of inclusion is a perceptual change in the concept of "normalization" underlying the integration model. The change is manifested in the transition from an attempt to "change" a person with special needs and "normalize" him, to a desire to "include" them as they are and adapt society to them. The inclusion model shifted its weight to the humanistic model with an increasing demand that society should adapt itself and accommodate those with special needs. The concept of inclusion stemmed from scholars from Scandinavia and the US, who believed that for de-facto equality, it is not enough to adapt people with special needs to society, but also vice versa. 5 For that to happen, the public should be educated to accommodate people with special needs and should acknowledge that they are able to live in good conditions, no less than those of an average citizen. Without a profound social change of consciousness, all people with special needs will still be considered "different", will not receive basic human respect and dignity. Not because they are incapable, but because they do not meet the criteria set by the mainstream. The rights of people with disabilities should be enshrined in law, which will guarantee them equal rights. The weak point of the principle of normalization lies in the interpretation of the term "normal". There was a general misconception that normal means good and disabled means not-good. This perception defined how to address people with special needs; in educational settings, the staff believed that people with special needs should be "normalized" as much as possible. People with special needs would like to be accepted as they are. 6 In recent years, the principle of normalization has been redefined. This new definition corresponds to the ethos of a heterogeneous and democratic society, and it focuses on society's readiness to integrate all individuals. In order to accommodate people with disabilities, a two-level, comprehensive accessibility is needed: (a) the physical level: the public space must be adapted and made accessible to the physical limitations of people with special needs; (b) the perceptual level: there is a need to change basic attitudes towards people with special needs, consider them as human beings and accept them as they are and not as disabled people who should be corrected. Following the critique on the concept of normalization among the professional community, there was a tendency to replace the term 'normalization'/'integration' with the terms 'inclusion' and 'participation'. The term 'inclusion' expresses the basic legal right of equality. The two main laws in Israel that address integration are the Special Education Law and the Equal Rights for Persons with Disabilities Law. 7,8 The purpose of these laws was determined in the body of the law as follows: Section 2 of the Special Education Law provides as follows: 2. The goals of special education services are -(1) To promote and develop learning, competencies, and abilities of students with special needs and their physical, mental, emotional, social and behavioral functioning as well as to provide them with knowledge, life skills, and social skills; 7 Special Education Law, 1988. 
8 Special Education Law, 1988. (2) To ensure the right of students with special needs for equal and active participation in society, in all areas of life, and to provide an appropriate response to their special needs in a way that will enable them to live in maximal independence, privacy, and dignity, while realizing their abilities; (3) To promote the integration of students with special needs in regular educational institutions. 9 The Equal Rights for Persons with Disabilities Law: (2) To protect the dignity of people with special needs and ensure their right for equal and active participation in society, in all areas of life, and to provide an appropriate response to their special needs in a way that will enable them to live in maximal independence, privacy, and dignity, while fully realizing their abilities. 10
Integration in Israel - historical review
In 2002, the Israeli Special Education Law was extended and is now referred to as the Integration Law, which addresses the integration of children with special needs in regular education. Article 20B of the proposal states: "An integrated student is entitled, as part of his studies at a regular educational institution, to supplemented teaching and learning as well as to special services..." (section 20b). The amendment specifies the composition of the Integration Committee, whose role is to determine the eligibility of a student with special needs in a regular school and the need to tailor an educational program for each integrated student. The amendment clearly states, for the first time, the necessity to integrate children with special needs into the regular education system, with the addition of special instruction and special services. In addition, the decision on eligibility for each child will be made at the school level according to the recommendation of the integration committee, in cooperation with the parents. That is, parental involvement and partnership are now enshrined in legislation and parents can appeal to the committee. The Special Education Law and its expansion in 2002 has a new chapter that defines educational integration as the desired outcome - giving preference to the regular educational system over special education; providing special education services and regular care within the regular framework; and extending parental participation in making decisions concerning their children, their participation in placement committees and disclosure of documents to the parents. The Ministry of Education has established three different frameworks for the integration of special education students: a special education school, a special education class in a regular school (an advancing class), and individual integration in a regular class and a regular school. In recent years, there has been growing public interest in integrating children with disabilities into the regular education system. This interest is reflected in the increasing involvement of organizations and associations, in discussions in the Knesset (Israeli parliament) committees, in the legal-legislative field, in petitions submitted to the courts, and in the establishment of a public committee (chaired by former judge Dalia Dorner) to examine the policy regarding students with special needs. 9 Special Education Law, 1988. 10 Equal Rights for Persons with Disabilities Law, 1998. 
This committee examined the implementation of the integration section of the law and recommended various improvements: parental involvement and letting them choose the suitable framework for their child; preferring a flexible budgeting method -"the budget follows the child"; individual decision on the child's placement, according to his level of functioning; training and professional development for the teacherassistants; training teachers from the regular education track; proper equipment of special education settings and locating them near regular schools. A Brookdale Institute (2010) report revealed that graduates of the system who have been integrated into regular education, report a lack of social connections after school hours. The report indicated that the educational integration at schools does not enhance the social lives of students with special needs in the after-school hours, that is, integrated students have few after-school social experiences. Another finding was that all students in schools where children with disabilities were integrated did not receive adequate preparation. It was also found that the integrated students do not receive a life preparation program and do not have the skills needed to integrate into society. 11 These findings indicate that "it takes two to tango". That is, the inclusion target (Objective 12) of the Ministry of Education, which has been implemented since 2012, requires that the regular schools should be adept at accommodating students with special needs. This step is critical, as are the integration and life-skills programs for students with special needs. Consequently, those students do not enjoy an optimal social life and do not take an active part in the community. Integration and quality of life The concept quality of life represents an ideology and a sociopolitical strategy that has been more prevalent in the last two decades. This means that it is not enough to strive for the integration of the individual in a more normative framework, but that he or she must be guaranteed quality of life. 12 The term 'quality of life' pre- sents an alternative paradigm to the medical paradigm on which the special education system was based. The integration movement, which created an education reform, expanded the meaning of the term 'quality of life' and applied it to every student that is different from the norm in his surroundings, in terms of origin, socio-economic status, etc. According to this paradigm, the educational framework should tailor an individual program for each child and adolescent with disabilities, after finding out about the student's needs, preferences, and abilities. It will take into consideration his opinion and allow him to make choices and decisions. The program is supposed to take into account various aspects -social ones, independence, physical comfort, personal development, and psychological well-being. Contrary to the integration movement, which was based on the medical model, the inclusion movement, which is based on the social model, contends that disability is not a feature of the individual but a state of interaction between the individual and his environment and the assistance provided to him. That is, the manifestation of disability is a product of social definition because society decides how to evaluate people with disabilities and judge them. 
Supporters of the movement argue that children with disabilities should not be adapted to the framework, as implied by the integration model, but on the contrary -that the framework should be adapted to the children. For example, instead of providing the student with a sequence of special education framework, as suggested by the integration model, he should be given a series of services within a regular class. The services will be ranked according to the scope of the class and according to the degree of intensity of the adjustments required. 13 This view stems from the movement's fierce belief that equality is a moral value that should be protected unconditionally. 14 Inclusion Inclusion is a concept from the field of psychology that describes the ability to accept feelings and difficulties of another person as they are, without rejecting or denying them, or transferring them to others in an unadapted manner. Inclusion is associated with the ability to observe difficult emotions and situations or interpret them in a way that will enable accepting and assimilating them. Inclusive schools were first established in Israel in 2017. Those first four schools host students with special needs and "regular" students. Dozens of additional inclusive schools are about to open in 2021. This reflects the desire of the educational system in Israel to prioritize the inclusion program over the integrative program. Inclusive schools in Israel An inclusive school is a school built entirely around the inclusion of children with special needs alongside "ordinary" children. Adi Altschuler, a social entrepreneur and the founder of Wings of Kremboa youth movement for children with and without special needs, initiated the establishment of inclusive schools so that the inclusion and participation revolution will take place in formal education as well. In inclusive schools, every third student has special needs. The school is physically and pedagogically adapted for this purpose. The curriculum provides educational quality on the one hand, and inclusive and integrative thinking on the other. Teachers are substantially supported by special education teachers and integration assistants. All staff members are trained according to the inclusion model. Educational inclusion Educational inclusion is based upon several social and educational approaches. Education is part of society and therefore must apply social norms and advocate moral values. Educational inclu-sion stresses the acceptance of the individual, regardless of who he or she is, by providing the setting and opportunity to express their needs and receive the optimal conditions to realize their abilities, even if they are different from those of their peers. The principle underlying the inclusion policy is the aspiration of the education system to create meaningful learning that has involvement, belonging, interest, enthusiasm, emotional and mental connection, and constant growth for all participants. Israel is a multicultural and diverse society. Therefore, there is a need to apply concepts of inclusion and diversity in various services and settings. The Israeli educational system consists of students with different characteristics and diverse needs. Each student has strengths, as well as skills and competencies that require support and enhancement. The different educational frameworks aim to accommodate each student's needs, as part of the institution's raison d'être. 
15 The Ministry of Education has set inclusion as a pivotal goal in its working plans since 2012, recognizing that openness to learning about and getting acquainted with "others" will advance us to be the type of society we aspire to. An inclusive school provides its students with the optimal conditions for their development, advancement, and mental well-being. It is a place that recognizes diversity, flexibility, and creative thinking. It works to create a sense of belonging, protection, and meaning, and maintains a meaningful dialogue with all its members - students, teachers and other staff members, parents, and the surrounding community. In recent years, school inclusion has become a priority in the national agenda. Many teachers from regular education receive special education training, and regular schools are transforming into inclusive schools. An inclusive school enables children with mild, moderate, and severe disabilities to integrate into regular settings near their homes and acquire the same education as their peers, only adapted to their individual needs. The school inclusion program has a vital role in educating future generations to be tolerant and accepting of all others. In the educational system in Israel, as in other countries, there are students with diverse abilities and different needs. The inclusion and participation of all students is a top priority. An inclusive society recognizes the added value of diversity and its advantages. People are different from one another - each has abilities, needs, wishes, and desires, and all individuals can contribute to shaping our society. The commitment to the inclusion and integration of students is an important challenge for the teaching staff. This commitment means that the staff members maintain the perception that every student is entitled to study within his immediate community and to experience shared living throughout the day, in educational institutions, in after-school activities, and in extra-educational frameworks. Moreover, it should be acknowledged that different responses to different students benefit the entire class and promote it as a whole. Inclusion in educational settings relates to four central "action areas": pedagogical inclusion, emotional-social inclusion, organizational inclusion, and environmental inclusion. This division into four areas is not dichotomous, but it allows for an in-depth, holistic observation of the educational institution as one organism with a variety of study trends, treatment options and tailored teaching. Inclusion and participation at schools are reflected in the provision of multiple responses to a variety of needs, in those four "action areas". This series of responses allows each student to progress and realize his potential, find interest in things, expand his social skills, and enrich his emotional world. The ability of the teaching staff to address the important moral and professional challenges they face is a key goal for the educational system. The inclusion and participation of students with special needs strengthen the ability of the teaching staff to address those important moral and professional challenges. These challenges provide an opportunity for enriching professional and emotional experiences. 15 R. Slee (2011). The irregular school: Exclusion, schooling and inclusive education. Oxon: Routledge. 
Views of student teachers and teachers toward inclusion of students with special needs One of the factors influencing teachers' attitudes is knowledge about children with special needs and their integration in regular classes. This knowledge is acquired during both teaching training and in service. Studies confirm the assumption that training in special education, during those two professional periods, is necessary in order to reduce objections to integration. Enriching teachers' knowledge about integration and ways to meet the needs of integrated students may reduce negative attitudes toward integration. 16 Teachers who reported a high level of special education training, or experience in teaching students with special needs, held more positive views toward integration. 17 Rothenberg and Reiter (2002) conducted in a study in which 92 Israeli education students from non special-education study programs participated. 18 The study group included 59 students who took an introductory course in special education; the control group included 33 students who did not take that course. The study addressed the question of whether there is a connection between taking introductory courses in special education and more positive attitudes towards children with special needs and their integration in regular classes. The syllabi in those courses were based on pedagogical and didactic principles, mainly education to equality, justice, and fairness towards all groups and to all individuals. The study showed that students from the study group changed their attitudes towards children with special needs and their integration in the regular educational system. The change was apparent in all components of one's views: emotional, behavioral, and cognitive. ______________ 17 K. Parasuram (2006 These studies indicate a positive relationship between learning about disabilities and preparing to work with disabled students, and positive attitudes of teachers towards inclusion. Studies also indicate that positive attitudes towards inclusion lead to optimal integration. 19 Therefore, in view of the 11 th amendment to the Special Education Law (1988), which advocates inclusion and participation of every student with special needs in Israel, we are committed to preparing the educational staff early on in their academic training, in order to include and integrate all special education students within regular education settings. Recommendations and a training model Studies indicate that the status of special education teachers, subject teachers, and educators has been undergoing change, in the trend toward inclusive educational system. Teachers do not always know what their status is, and the school organizational structure has changed. Teachers are required to work collaboratively and synthesize the information collected about each student into personalized programs aimed at advancing students with special needs. As a pedagogical instructor and a college lecturer, I meet student teachers with special education background as well as subject student teachers; I also meet teachers in whose classes there are students with special needs. From conversations I've had with them and the results of the studies detailed above, I see the need to prepare those future teachers already in their academic training, familiarizing the student teachers with the type of tasks they'll need to perform in inclusive schools. 
Following the inclusion goals set by the Ministry of Education and the planned follow-up goals in the State of Israel, I recommend that inclusion programs be part of the academic studies and prepare student teachers for educational inclusion and inclusive teamwork. ______________ Since student teachers who do not specialize in special education will be required, according to the school inclusion program, to take part in inclusion programs of students with special needs, I propose that in the first academic year all student teachers in Israel should be introduced to inclusive education, to facilitate their active participation in the schools' inclusion programs. Teaching curricula should include courses on social inclusion. Graduates of such courses will acquire tools to instill in students the values of social and emotional inclusion of students with special needs. The model is based on the fact that the field of inclusion is an integral part of the degree in education. Inclusion-related courses will encompass four semester courses in each academic year and on the fourth (practical training) year, student teachers will be required to gain practical experience in an inclusive class or school. The training model for the three academic years contains courses and workshops in the following topics: 1. Inclusive pedagogy and optimal differential learning. 2. Inclusive values and social and emotional integration. 3. Exposure to the various disabilities and their characterization. 4. Teamwork and collaboration. 5. Principles of inclusive schools and their inter-organizational working processes. Practical experience must include differential learning and coping with social and emotional differences. The principle of inclusion must be an integral part of the curricula at teachers' colleges and universities if we want all schools to be inclusive. It should be woven -both theoretically and practicallyinto the academic studies, to prevent a situation whereby a teacher encounters the concept of inclusion and is trained for it only after receiving certification. I sincerely hope that the inclusion model that is gaining momentum throughout the world and in Israel will be a part of our outlook and that the inclusion of students with special needs at an early age will contribute to community building and create an inclusive generation of people who consider everyone equal -a society with tolerance to diversity and accepting of others completely.
2021-03-02T14:16:31.995Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "9836475f8f748635a39656b161320faac074d7dd", "oa_license": null, "oa_url": "https://pressto.amu.edu.pl/index.php/ikps/article/download/27052/24769", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2e2afbcdf84b424e6a533037feec6cf61659e388", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
204741240
pes2o/s2orc
v3-fos-license
Pulse-Wave-Pattern Classification with a Convolutional Neural Network Owing to the diversity of pulse-wave morphology, pulse-based diagnosis is difficult, especially pulse-wave-pattern classification (PWPC). A powerful method for PWPC is a convolutional neural network (CNN). It outperforms conventional methods in pattern classification because it extracts informative abstractions and features. For previous PWPC criteria, the relationship between pulse and disease types is not clear. In order to improve the clinical practicability, there is a need for a CNN model to find the one-to-one correspondence between pulse patterns and disease categories. In this study, five cardiovascular diseases (CVD) and complications were extracted from medical records as classification criteria to build pulse data set 1. Four physiological parameters closely related to the selected diseases were also extracted as classification criteria to build data set 2. An optimized CNN model with stronger feature extraction capability for pulse signals was proposed, which achieved PWPC with 95% accuracy in data set 1 and 89% accuracy in data set 2. This demonstrated that pulse waves are the result of multiple physiological parameters. There are limitations when using a single physiological parameter to characterise the overall pulse pattern. The proposed CNN model can achieve high accuracy of PWPC while using CVD and complication categories as classification criteria. Long short-term memory (LSTM) 12 , as a variant of RNN, can effectively prevent gradient vanishing when processing time-series signals. In recent years, remarkable achievements have been made in the field of pattern classification via the use of convolutional neural networks (CNNs) as deep learning structures [13][14][15][16] . CNNs provide an end-to-end learning model. CNNs trained by the gradient descent method can learn the characteristics of the input data and further complete the pattern classification. CNNs have a strong ability for feature learning and pattern classification. The main reason is that the features of the lower layers are derived from the partial information of the upper layer through convolution kernels with shared weights. CNNs have been applied in the classification of human physiological signal patterns. Based on a 34-layer CNN, Rajpurkar et al. classified electrocardiogram (ECG) signals into 14 types 17 . Moreover, Rubin et al. classified heart-sound recordings based on a deep CNN and Mel-frequency cepstral coefficients 18 . These studies used CNNs to achieve pattern classification of relevant physiological signals and achieved higher accuracy than the diagnostic results of experienced physicians. Furthermore, Hu et al. used a CNN to divide pulse waves into two types: health and subhealth 19 . In the present study, in view of the large amount of pathological and physiological information contained in pulse signals, we collected the required data under the guidance of medical doctors and established two data sets based on either CVD/complication categories or physiological parameters. We proposed an optimised CNN model for PWPC based on these two data sets. The purpose of this study was to identify a practical and efficient classification criterion for PWPC based on a CNN, which contributes to non-invasive, practical and effective diagnosis of CVDs and related complications.

Results

The average pulse waves of each pattern in the two data sets are shown in Fig. 2.
We plotted the learning curves of data set 1 and data set 2, respectively, to evaluate their PWPC performance with the proposed CNN model, as shown in Fig. 3. For the cost-value curve, the rate of decline for data set 1 was significantly higher than that for data set 2. For the training error and the test error, the minimum value for data set 1 was smaller than that for data set 2. In particular, the minimum test error for data set 1 (0.08, reached at epoch 90; the epoch is the number of iterations in CNN pattern classification) was much smaller than that for data set 2 (0.34, reached at epoch 100). With the same proposed CNN, the six pulse patterns in data set 1 thus showed higher calculation efficiency and feature expression ability than the five patterns in data set 2.

Figure 1. According to previous studies' classification criteria, we show five pulse waves that exhibit a taut pulse pattern, which involves a pulse with a high second peak (local time shifting), as follows: (a) typical taut pulse, (b) taut pulse with high tidal wave and (c) taut pulse with tidal wave merged with percussion wave 9 . With the help of medical doctors, (d) and (e) were extracted from our database. Although (a-e) all feature a taut pulse pattern, there are still differences in some local waveform characteristics. In addition, the subject of (d) suffered from hyperlipidaemia, while the subject of (e) suffered from atherosclerosis. This shows that, under the previous classification criteria, a single pulse pattern might correspond to many disease categories.

Table 1 shows the overall values of the evaluation parameters in the two data sets. The accuracy and other evaluation parameters of PWPC in data set 1 (overall accuracy = 0.95) were higher than those in data set 2 (overall accuracy = 0.89). Tables 2 and 3 show the details for each pattern in the two data sets separately. Pulse-wave patterns representing healthy subjects (H1 and H2) could be identified with high precision (precision H1 = 1, recall H1 = 0.99; precision H2 = 0.97, recall H2 = 0.97). HCA, as the pulse pattern of complications, had the lowest classification rate in data set 1 (precision HCA = 0.89, recall HCA = 0.91). In addition, the classification performance of the other pulse patterns in data set 1 was higher than that in data set 2. To further assess the PWPC results of the proposed CNN model, the two data sets were fed into different neural network models for PWPC. Table 4 shows the accuracy of PWPC with those different models; it details the network methods, classification criteria, number of subjects, and the accuracy. It demonstrates that, compared with other neural networks or other CNN structures, our proposed CNN model achieved higher accuracy in PWPC under the new classification criteria, which also implies stronger feature extraction ability for pulse signals. To further analyse the causes of errors in pattern classification, we determined the confusion matrices of the two data sets, as shown in Fig. 4. The cause of errors in data set 1 was mainly the erroneous classification of the four pulse patterns Hn, At, HCA and Td. In data set 2, with the exception of the control group (H2), the remaining four pulse patterns (BP, CAVI, baPWV and BV) were found to interfere with each other and had higher error rates.
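The per-pattern evaluation parameters reported above (precision, recall, F-measure) and the confusion matrices can be derived directly from the true and predicted labels of a test set. The following minimal sketch illustrates that computation with scikit-learn; the label arrays and the pattern abbreviations are placeholders for illustration, not the study's actual data.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

# Illustrative pattern names for data set 1 (healthy plus the five disease patterns);
# "Hl" for hyperlipidaemia is a guessed abbreviation, not taken from the paper.
patterns = ["H1", "Hn", "At", "HCA", "Hl", "Td"]

# In the study these would be the test-set labels and the CNN's argmax predictions;
# here they are random placeholders so the snippet runs on its own.
rng = np.random.default_rng(0)
y_true = rng.integers(0, len(patterns), size=420)
y_pred = np.where(rng.random(420) < 0.95, y_true,
                  rng.integers(0, len(patterns), size=420))

cm = confusion_matrix(y_true, y_pred)  # rows = real categories, columns = predicted categories
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, zero_division=0)

print("overall accuracy:", accuracy_score(y_true, y_pred))
for name, p, r, f in zip(patterns, prec, rec, f1):
    print(f"{name}: precision={p:.2f}  recall={r:.2f}  F-measure={f:.2f}")
print(cm)
```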
Discussion

In this study, CVD and associated complications as well as related physiological parameters were extracted and used as classification criteria. According to the new classification criteria, we screened the subjects' pulse waves and created data set 1 and data set 2, respectively. An optimised CNN model was proposed for PWPC. It achieved the classification of six pulse patterns in data set 1 with an accuracy of 95% and the classification of five pulse patterns in data set 2 with an accuracy of 89%. The main contributions of this study are as follows: 1. Two pulse-wave data sets were created, which contain a large amount of physiological and pathological information about the subjects. 2. New classification criteria and an optimized CNN model were proposed, which achieve higher accuracy than previous studies 7,8,[19][20][21] . This study demonstrates that CVD and complications are practical and efficient classification criteria, enabling the optimised CNN model to achieve high accuracy for PWPC. We observed that the classification errors in data set 1 were mainly due to the erroneous classification of the Hn, At and HCA patterns. This was because HCA represents the simultaneous occurrence of hypertension and atherosclerosis. There must be some similar pulse characteristics between HCA and the other two diseases, which indicates that, in order to ensure that the characteristics of the different pulse patterns are typical, the selected data specimens must exclude the effect of complications. In addition, in data set 1, Td was also partially misclassified as Hn (n = 1), At (n = 3) and HCA (n = 1). Previous studies showed that type 2 diabetes can increase the risk and mortality of CVD, and that the two have similarities in the damage they cause to the cardiovascular system [22][23][24] . Thus, there might have been similar pulse waveform characteristics between the Td pattern and the Hn, At and HCA patterns, which led to classification errors. In data set 2, four pulse patterns (BP, CAVI, baPWV and BV) were found to interfere with each other in pattern classification. Previous studies showed that the effect of a single physiological parameter on the pulse waveform is mainly reflected in the change of some local characteristics [25][26][27] . The pulse waveform characteristics associated with the same value of one specific physiological parameter change as a result of differences in the other physiological parameters, as shown in Fig. 5. This may have led to the errors of pattern classification in data set 2. Our study showed that the pulse wave is the result of multiple physiological parameters. There are clearly limitations associated with using a single physiological parameter to characterise the overall pulse pattern. Disease is also the result of multiple physiological parameters, which might explain the higher classification accuracy in data set 1.
This study had several limitations. The most important one was the relatively limited number of subjects. Limited by the number of subjects, the effects of some physiological information such as age, height and weight on the pulse waveform were ignored, which inevitably led to errors in pattern classification 1 . However, in our study, the number of pulse waves in each pulse pattern was several times that in some previous studies 21,28 . To some extent, this indicates that each of our patterns could represent the typical pulse characteristics. In addition, this study focused on the classification criteria of pulse patterns. For this purpose, we used the same CNN model to classify the two data sets. Regarding the low classification rate of data set 2, we did not explore whether it could be improved by optimising the architecture of the CNN model.

Conclusions

In this study, we established pulse-wave data set 1 and data set 2 based on the classification criteria of CVD categories and related physiological parameters. A CNN was used to extract features from the two data sets and to achieve PWPC with high accuracy. The main contribution of this study is to propose new classification criteria for PWPC and to construct a matching CNN model. The optimized CNN model achieved PWPC with 95% accuracy in data set 1 and 89% accuracy in data set 2. This study demonstrated that pulse waves are the result of multiple physiological parameters, so there are limitations when using a single physiological parameter to characterize the overall pulse pattern. The proposed CNN model can achieve high-accuracy PWPC while using CVD and complication categories as classification criteria, which contributes to non-invasive, practical and effective diagnosis of CVD and associated complications.

Method

Data collection. The original pulse-wave data were from the "Study on Evaluation Method of Cardiovascular System Based on Non-invasive Detection of Blood Pressure and Pulse-Wave of Limbs 29 ", which recruited 412 subjects and determined their physiological parameters and more than 12,000 cycles of pulse waves. The pulse and blood-pressure signal measuring device was a Fukuda VS-1500A. In addition, the subjects' brachial-ankle pulse-wave velocity (baPWV) and blood viscosity were collected. All subjects were registered at Beijing University of Technology Hospital, and information on their diseases was collected through the subjects' medical records. The study, with its experimental protocols and relevant details, was approved by the Institutional Ethics Committee of Beijing University of Technology and Tohoku University. All experiments were performed in accordance with relevant guidelines and regulations. We explained the content of the study to the subjects in detail, and on this basis the subjects signed the informed consent form.

Pulse waveform denoising and normalisation. In this study, we collected the pulse signals from the wrists of the subjects. The denoising and normalization of the pulse signals were carried out with the same method as in previous studies 30 . Firstly, the noise was removed with a wavelet-transform decomposition method 31 . Then, in order to prevent distortion of the pulse signals, and in accordance with the Nyquist theorem and the actual sampling frequency 8,19 , the number of sampling points per single cycle of the pulse wave was set at 200. Because the focus of this study was the pulse-wave pattern, the amplitude of the pulse wave was normalized to 0-200 in each cycle.
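As a rough illustration of the preprocessing just described (wavelet-based denoising, resampling each cycle to 200 points, and rescaling the amplitude to 0-200), the sketch below uses PyWavelets and SciPy. The wavelet family, decomposition level and thresholding rule are assumptions made for illustration; the study only states that a wavelet-decomposition method was used.

```python
import numpy as np
import pywt
from scipy.signal import resample

def denoise_and_normalise(cycle, wavelet="db4", level=4):
    """Denoise one pulse-wave cycle and map it to 200 samples with amplitude 0-200.

    The soft-threshold rule and the 'db4' wavelet are illustrative choices; the paper
    only specifies that a wavelet-decomposition method was applied.
    """
    coeffs = pywt.wavedec(cycle, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(cycle)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    clean = pywt.waverec(coeffs, wavelet)[: len(cycle)]

    clean = resample(clean, 200)                 # 200 sampling points per cycle
    clean = clean - clean.min()
    if clean.max() > 0:
        clean = 200.0 * clean / clean.max()      # amplitude normalised to 0-200
    return clean

# Example with a synthetic noisy cycle (two Gaussian bumps plus noise).
t = np.linspace(0, 1, 350)
noisy = (np.exp(-30 * (t - 0.2) ** 2) + 0.4 * np.exp(-40 * (t - 0.5) ** 2)
         + 0.02 * np.random.randn(t.size))
print(denoise_and_normalise(noisy).shape)  # (200,)
```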
Data sets. Previous studies classified pulses into patterns based on the TCPD theory [7][8][9][10]32 . However, as mentioned previously, under this classification criterion one pulse pattern may correspond to a variety of disease categories. Thus, in this study, based on the subjects' clinical data, we directly selected five diseases as new classification criteria: hypertension, atherosclerosis, hyperlipidaemia, type 2 diabetes and hypertension complicated by atherosclerosis (HCA). Type 2 diabetes, as one of the common complications of CVD 33 , and HCA were used to study the effects of CVD complications on pulse waves. To ensure the typical characteristics of each pulse pattern, the pulse signals from subjects who suffered from only one of the five diseases, together with those from healthy subjects (a total of six types), were used as new pulse patterns to build data set 1, as shown in Fig. 6. We simultaneously selected four physiological parameters closely related to the selected diseases as classification criteria: blood pressure, which can be used as an indicator for assessing hypertension 34 ; the cardio-ankle vascular index (CAVI), which is one of the indicators for assessing atherosclerosis 35 ; brachial-ankle pulse-wave velocity (baPWV), which can be used as an indicator for evaluating cardiovascular function in type 2 diabetics 36 ; and blood viscosity, since in patients with hyperlipidaemia an increase of blood lipids often occurs simultaneously with increased blood viscosity 37 . Based on the subjects in data set 2 and the medical reference range, we determined the range of each physiological parameter. The pulse waves of subjects in whom only one of the four parameters was beyond the range were selected. The pulse waves of subjects whose four parameters were all within the range were also selected as a healthy control group. These five types of pulse pattern were then used to build data set 2, as shown in Fig. 7. For the processing of the pulse images, this study used the same method as previous studies 30 . We extracted the pulse cycles from the selected subjects. To avoid data duplication affecting the accuracy of the CNN prediction, all pulse waves in the two data sets were taken from different cycles. The total number of cycles for each pulse pattern was 210, which were divided into a training set and a test set, as shown in Table 5. As mentioned above, the number of sampling points in a single cycle of the normalized pulse wave was 200, and the amplitude was 0-200. Therefore, the pulse-wave signals were processed into input PNG pulse images with a size of 200 × 200 pixels.

Figure 6. The process of screening the subjects in data set 1. a,b,c,d Screening criteria: The number of subjects for a selected disease should be more than 20. The disease or complications must be of the five types selected in this study. There is no serious abnormality in the pulse waves caused by noise or incorrect data collection, among others. We show all cases and numbers of excluded subjects in c: type 2 diabetes complicated by hypertension (n = 4), type 2 diabetes complicated by atherosclerosis (n = 3), type 2 diabetes complicated by heart failure (n = 5) and diabetic foot disease (n = 8). Based on the screening criteria, we excluded these cases.

The proposed CNN. In this study, an optimised CNN model (10-layer) was proposed based on DCNN 19 and LeNet-5 38 , which had been applied for PWPC, as shown in Fig. 8.
Compared with the previous networks, we added dropout 39 between the third max-pooling layer and the fully connected layer. When CVDs were used as classification criteria, each pulse pattern changed from a local waveform difference under the previous criteria to an overall pulse-waveform difference. This led to too many characteristic parameters of the pulse wave being extracted by the CNN, which in turn led to over-fitting in the training process. Pre-experimental results showed that the dropout layer could help reduce test errors and avoid over-fitting in the training process (see Supplementary Fig. S1). In addition, the final softmax activation produced a distribution over the output probability classes for each pulse pattern of the two data sets. Besides the layers mentioned above, the CNN also included three convolution layers, three max-pooling layers and two fully connected layers. The number of convolution layers was determined by the number of pulse-wave characteristics. Too few layers led to inadequate feature extraction ability of the CNN, while too many layers increased the time cost and calculation cost. In this study, we determined the number of layers from pre-experimental results. The convolutional layers were used to extract complex parameters of the input feature maps by convolution with kernels. The max-pooling layers achieved the down-sampling of the input signals by choosing the maximum value of an area as the value of the pooled area. The max-pooling layers could retain the main features of the input signals while reducing the parameters and computation, which helped to avoid over-fitting and improve the generalization ability of the CNN model 40 . The final two fully connected layers combined all of the upper feature maps into a one-dimensional array, which was used to classify the output. In this study, we used the Adam optimiser, which is straightforward to implement, with high calculation efficiency and low memory requirements 41 . In accordance with previous studies and a preliminary experiment, the parameters of the Adam optimiser were as follows: learning rate = 0.001, ϵ = 0.001, ρ1 = 0.9, ρ2 = 0.999 and δ = 1E−8. During the optimisation process, we saved the best model configuration as evaluated on the test set. The CNN was trained with neural_network_console (Sony Company) on an Intel(R) HD Graphics 630 with batch size 64 for 100 epochs.

Evaluation. The proposed CNN was evaluated with the average of the operating parameters calculated over time. The overall accuracy, precision, recall and F-measure were determined to assess the classification performance of the network, as presented in the Results section. To further evaluate the classification performance of each pulse pattern, we also present the evaluation parameters of each pattern and the confusion matrices for the two test sets. The evaluation parameters were calculated using the true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). In order to further evaluate the PWPC capability of the CNN model proposed in this study, we selected three different neural networks (LeNet 38 , AlexNet 14 , VGG-Net 15 ). Data set 1 and data set 2 were used as inputs to these three networks, respectively. The PWPC results were compared with those of the CNN model proposed in this study.
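The architecture described above (three convolution layers, three max-pooling layers, a dropout layer before the fully connected part, two fully connected layers, a softmax output, and an Adam optimiser with learning rate 0.001) can be sketched roughly as follows. Filter counts, kernel sizes and the dropout rate are not fully specified in the text and are therefore placeholder assumptions; the actual settings are those shown in Figure 8 of the paper.

```python
import tensorflow as tf

num_patterns = 6  # six patterns in data set 1; use 5 for data set 2

# Sketch of the 10-layer CNN; filter counts, kernel sizes and the dropout rate
# are illustrative guesses rather than the values from Figure 8.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 5, activation="relu", input_shape=(200, 200, 1)),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Dropout(0.5),          # dropout before the fully connected part
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_patterns, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
# Training would then be along the lines of:
# model.fit(train_images, train_labels, batch_size=64, epochs=100)
```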
Figure 3. Learning curves for data set 1 (a) and data set 2 (b).

Figure 4. The confusion matrices of data set 1 (a) and data set 2 (b). The confusion matrix is an intuitive method for evaluating the results of pattern-classification CNN models. The real categories (rows) and predicted categories (columns) of the classification results can be read directly. For example, in matrix (a), there were 70 (65 + 5) pulse waves which really belonged to the Hn pattern (the second row), while the CNN model predicted 69 (65 + 1 + 1 + 2) pulse waves in the Hn pattern (the second column).

Figure 5. The pulse waveforms of the baPWV pulse pattern with different baPWV and different CAVI values. Pulse waves from six subjects were selected.

Figure 7. The screening process of the subjects in data set 2. a,b,c,d Screening criteria: The number of subjects for the selected parameters should be more than 20. Subjects' other parameters, such as stroke output and cardiac output, must be within the normal medical reference range. There is no serious abnormality in the pulse waves caused by noise or incorrect data collection, among others. For a,b,c,d, most of the excluded subjects had three or even four parameter values outside of the range. To ensure that the characteristics of each pulse pattern were typical, we excluded these subjects.

Figure 8. An illustration of the CNN architecture. The size settings of the convolution kernels and feature maps are shown in the figure.

Table 1. PWPC evaluation of the pulse patterns in the two data sets.

Table 2. PWPC evaluation of each pulse pattern in data set 1.

Table 3. PWPC evaluation of each pulse pattern in data set 2.

Table 4. PWPC accuracy of different methods.

Table 5. The details of the PWPC data sets.
2019-10-17T14:37:15.320Z
2019-10-17T00:00:00.000
{ "year": 2019, "sha1": "832a6d52735795666b346d0ec9a368b07fc83f10", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-51334-2.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9a1eec913ac35468179c1c68752e11f454243344", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
119213800
pes2o/s2orc
v3-fos-license
Universality of the topological susceptibility in the SU(3) gauge theory The definition and computation of the topological susceptibility in non-abelian gauge theories is complicated by the presence of non-integrable short-distance singularities. Recently, alternative representations of the susceptibility were discovered, which are singularity-free and do not require renormalization. Such an expression is here studied quantitatively, using the lattice formulation of the SU(3) gauge theory and numerical simulations. The results confirm the expected scaling of the susceptibility with respect to the lattice spacing and they also agree, within errors, with computations of the susceptibility based on the use of a chiral lattice Dirac operator. Introduction In QCD and other non-abelian gauge theories, the discussion of the effects of the topological properties of the classical field space tends to be conceptually non-trivial, because the gauge field integrated over in the functional integral is, with probability 1, nowhere continuous. The topological susceptibility, for example, is only formally given by the two-point function of the topological density at zero momentum, unless a prescription is supplied of how exactly the non-integrable short-distance singularity of the two-point function is to be treated. In lattice gauge theory, the problem was reexamined some time ago [1][2][3] starting from a formulation of lattice QCD which preserves chiral symmetry. An important result of this work was that the topological susceptibility can be written as a ratio of expectation values of other observables which remains well-defined in the continuum limit. A particular choice of regularization is then not required, i.e. the new formula provides a universal definition of the susceptibility. Moreover, this definition is such that the anomalous chiral Ward identities are fully respected. The aim in the present paper is to complement these theoretical developments by demonstrating the suitability of the universal definition for the computation of the topological susceptibility in lattice gauge theory. In this study, the pure SU(3) gauge theory is considered and a recently proposed version [4] of the universal formula is used (see sect. 2). As far as the feasibility of the calculation is concerned, the results are however expected to be directly relevant for QCD too. Singularity-free expressions for the topological susceptibility The formula for the susceptibility obtained in [4] is not very complicated, but some preparation is required to be able to write it down. From the beginning, the theory is considered on a finite hypercubic lattice with spacing a, volume V and periodic boundary conditions. While some particular choices have to be made along the way, these details are expected to be irrelevant in the continuum limit in view of the fact that the expression is renormalized and free of short-distance singularities. Spectral-projector formula The construction starts by adding a multiplet of valence quarks with bare mass m 0 to the theory. On the lattice, the added fields are taken to be of the Wilson type [5] and the associated massive Dirac operator D m is assumed to include the Pauli term required for O(a) improvement [6,7] (the relevant improvement and renormalization constants are collected in appendix A). The hermitian operator D m † D m has a complete set of orthonormal eigenmodes with non-negative eigenvalues α. 
On average there are only few eigenvalues below some threshold α th proportional to the square of the valence-quark mass (see fig. 1). Above the threshold, the spectrum has an approximately constant density with a slight downward trend in the range considered in the figure. Such a trend is absent in two-flavour QCD [4], but is qualitatively in line with the behaviour of the spectral density at next-to-leading order of quenched chiral perturbation theory [8]. The topological susceptibility is now given by [4] where P M denotes the orthogonal projector to the subspace spanned by the eigenmodes of D m † D m with eigenvalues α < M 2 . It is taken for granted in this formula that M 2 is above the effective threshold α th of the spectrum and that the renormalized valence-quark mass m R as well as the renormalized value M R of M are held fixed when the lattice spacing is taken to zero (cf. appendix A). Alternative expressions Equation (2.1) derives from a study of the renormalization and symmetry properties of the n-point correlation functions of the scalar and pseudo-scalar densities of the valence quarks. There exist various representations of the topological susceptibility of a similar kind, all having the same continuum limit. In particular, for any function R M of D m † D m which is equal to unity in the vicinity of the spectral threshold and rapidly decaying above M 2 . The shape of the function can otherwise be chosen arbitrarily and only affects the size of the O(a 2 ) corrections. In the numerical work reported later, R M is set to the rational approximation of the projector P M previously used in [4] for the computation of the mode number in For the reader's convenience, the function is given explicitly in appendix B. Numerical studies The expression on the right of eq. (2.2) is a ratio of well-defined expectation values that can in principle be computed through numerical simulations. In practice, the traces Tr{. . .} can normally not be evaluated exactly, but as explained in subsect. 3.3, they can be estimated stochastically with a moderate computational effort and without compromising the correctness of the final results. Simulation parameters The studies reported in this paper are based on simulations of the lattice theory at three values of the inverse bare gauge coupling β = 6/g 2 0 (see table 1). A well-known deficit of all currently available simulation algorithms for non-abelian gauge theories (including the link-update algorithms used here) is the fact that the integrated autocorrelation times of quantities related to the topological charge are rapidly growing when the lattice spacing decreases [9,10]. In order to guarantee the statistical independence of the N cnfg gauge-field configurations used for the "measurement" of the topological susceptibility, the distance in simulation time of subsequent configurations was required to be at least 10 times larger than the relevant autocorrelation times. Physical units are defined through the Sommer reference scale r 0 = 0.5 fm [11]. In the range of the gauge coupling covered here, the conversion factor r 0 /a from lattice to physical units was accurately determined by Guagnelli et al. [12]. The spacings of the three lattices thus decrease from roughly 0.1 to 0.05 fm by factors of 1/ √ 2, while the lattice sizes in physical units are approximately constant. Spectral projector parameters As already mentioned, the operator R M is taken to be a rational approximation to the projector P M . 
It thus depends on the valence-quark mass, the mass M and the parameters n and ǫ that determine the accuracy of the approximation (cf. appendix B). A reasonable choice of the latter, previously made ref. [4], is n = 32 and ǫ = 0.01. In the range of eigenvalues of D m † D m below 0.85 × M 2 , the approximation error is then smaller than 2.2 × 10 −4 , which is by far small enough to guarantee the absence of significant systematic effects in eq. (2.2). Moreover, the contribution of the high modes is safely suppressed. The valence-quark mass and the mass parameter M were adjusted such that their renormalized values in the MS scheme at normalization scale µ = 2 GeV are about 25 and 100 MeV, respectively. Using the information collected in appendix A, the corresponding values of the bare mass parameters, κ = (8 + 2am 0 ) −1 and aM , can be worked out and are listed in table 1. On the lattices considered, there are then 57 − 70 eigenmodes of D m † D m with eigenvalues below M 2 and an average density of roughly 1 such mode per fm 4 . As already emphasized, the calculated values of the topological susceptibility are not expected to strongly depend on all these details and should in any case always extrapolate to the same value in the continuum limit. The lattice effects are unlikely to be small, however, if aM is not much smaller than 1 or if the expectation values on the right of eq. (2.2) would be dominated by the modes up to and slightly above the spectral threshold, where the effects are kinematically enhanced. Both of these unfavourable situations are avoided by the above choice of the mass parameters. Random-field representation In lattice QCD, random field representations were introduced many years ago [13] and are now widely used. The application of the method in the present context requires a set η 1 , . . . , η N of N pseudo-fermion fields to be added to the theory with action where the bracket (η, ζ) denotes the obvious scalar product of such fields. For every gauge field configuration, these fields are generated randomly so that one obtains a representative ensemble of fields for the complete theory. In the rest of this section, expectation values are always taken with respect to both the gauge field and pseudofermion fields. The stochastic observables may now be introduced and a moment of thought then shows that the expectation values on the right of eq. (2.2) are given by The topological susceptibility can thus be computed by calculating the expectation values of A, B and C 2 . For a given gauge-field configuration, the evaluation of these observables requires the fields R M η k , R 2 M η k and R M γ 5 R M η k to be computed, i.e. the total numerical effort per configuration is roughly equivalent the one required for 3N applications of the operator R M to a given pseudo-fermion field. From this point of view, small values of N are favoured, but a good choice of N must also take into account the fact that the variance of the stochastic observables decreases with N . Some experimenting then shows that setting N = 6 is a reasonable compromise at the specified values of the mass parameters. Since R M is a rational function of D m † D m of degree [2n + 1, 2n + 1], the measurement of the stochastic observables requires the (twisted-mass) Dirac equation to be solved for altogether 2340 source fields. The computational load thus tends to be heavy, but the problem is well suited for the application of highly efficient solver techniques such as local (26) deflation [14] (see ref. 
[15] for a recent review of the subject). In particular, when these are used, the effort scales linearly with the lattice size and is nearly independent of the values of the mass parameters. Simulation results The simulation data discussed in the following paragraphs are summarized in table 2. In all cases, the statistical errors were estimated using the jackknife method and were combined in quadrature with the quoted scale errors (where appropriate). (a) Mode number. The average number ν of eigenmodes of D m † D m with eigenvalues below M 2 is an extensive quantity and is therefore normalized by the lattice volume in table 2. At the specified bare masses, the renormalized masses m R and M R are practically equal to 25 and 100 MeV, respectively, on all three lattices considered. In view of the renormalization properties of the mode number [4], the calculated values of ν/V are thus expected to be the same up to O(a 2 ) effects. Within errors, the figures listed in the second column of table 2 in fact coincide with one another. Note that the quoted errors do not take into account the fact that the mass renormalization factors and thus the renormalized values of the masses are only known up to an error of about 2% (see appendix A). Once this error is included in the analysis, one can still conclude, however, that the simulation results confirm the expected scaling of the mode number to the continuum limit at a level of precision of 3% or so. (b) Topological susceptibility. The values of the susceptibility calculated along the lines of the present paper are listed in the third column of table 2. Again one observes no statistically significant dependence on the lattice spacing. Finite-volume effects are, incidentally, known to be negligible with respect to the statistical errors on all three lattices [16]. Fits of the data by a constant or a linear function in a 2 yield consistent results in the continuum limit. Since the slope in a 2 turns out to vanish within errors, the (more accurate) number (c) Charge sectors & the Wilson flow. An understanding of how exactly the topological charge sectors emerge in the continuum limit has recently been achieved using the Wilson flow [18]. The definition of the topological susceptibility suggested by the sector division is geometrically appealing and computationally far less demanding than the spectral-projector formula (2.2). Presumably the two definitions agree in the continuum limit, but there is currently no solid theoretical argument that would show this to be the case. The values of the susceptibility computed using the Wilson flow are listed in the fourth column of table 2 (see ref. [18] for the details of the calculation). While they appear to be systematically lower than the ones obtained using the spectral-projector formula, the differences are statistically insignificant on each lattice. Moreover, there could be lattice effects of size up to the level of the statistical errors. Since the same ensemble of representative gauge-field configurations was used in the two cases, the quoted errors are correlated to some extent (not completely so in view of the use of random fields). The ratio listed in the last column of table 2 is therefore obtained with slightly better precision than the susceptibilities. Fits of the ratio by a constant and linear function in a 2 are both possible, the values in the continuum limit being 1.048(14) and 1.036(31), respectively. 
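The random-field evaluation underlying these numbers (subsect. 3.3) is, in essence, a Hutchinson-type stochastic trace estimator: the trace of an operator A is approximated by the average of (η, Aη) over N random noise fields η. The toy sketch below, in which a small random Hermitian matrix stands in for the lattice operator R_M, illustrates why a handful of noise fields (here N = 6) already gives a usable estimate. It is an illustration under stated assumptions (Gaussian noise, dense matrix), not lattice code.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small random Hermitian positive matrix stands in for R_M (a function of Dm†Dm).
n = 200
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X @ X.conj().T / n

def stochastic_trace(A, num_fields=6, rng=rng):
    """Estimate Tr(A) as the average of eta† A eta over Gaussian noise vectors eta
    normalised so that E[eta eta†] is the identity."""
    estimates = []
    for _ in range(num_fields):
        eta = (rng.standard_normal(A.shape[0])
               + 1j * rng.standard_normal(A.shape[0])) / np.sqrt(2)
        estimates.append(np.vdot(eta, A @ eta).real)
    return np.mean(estimates)

print("exact trace        :", np.trace(A).real)
print("stochastic (N = 6) :", stochastic_trace(A, num_fields=6))
print("stochastic (N = 100):", stochastic_trace(A, num_fields=100))
```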
The spectral-projector and the Wilson-flow definition of the susceptibility thus coincide to a precision of a few percent. While there is some tension in the data, there is no clear evidence for the definitions to be different at this level of accuracy. Conclusions The numerical studies reported in this paper confirm the universality of the spectralprojector formula (2.2) for the topological susceptibility. In particular, no statistically significant lattice-spacing effects were observed and the calculated values agree † In ref. [16], a different convention for the conversion from lattice to physical units was used and the value for the susceptibility quoted there is therefore slightly different from the one printed here. with the result obtained by Del Debbio, Giusti and Pica [16], where a chiral lattice Dirac operator was used. The numerical effort required for the stochastic evaluation of the spectral-projector formula increases proportionally to the number V /a 4 of lattice points, but is a flat function of all other parameters. On large lattices, computations of the susceptibility along these lines thus tend to be more feasible than those based on a chiral lattice Dirac operator (which scale roughly like V 2 ). Even less computer time is required if the susceptibility is defined via the Wilson flow, but a formal proof for this definition to be in the same universality class as the spectral-projector formula is still missing. With respect to the pure gauge theory, the application of the spectral-projector formula in QCD is not expected to run into additional difficulties. Accurate calculations of the topological susceptibility however require representative ensembles of, say, a few hundred statistically independent gauge-field configurations to be generated. This part of the calculation usually consumes most of the computer time and may rapidly become prohibitively expensive at small lattice spacings [10]. At present, computations of the susceptibility on lattices similar to the ones considered here are therefore not easily extended to QCD with light sea quarks. We wish to thank Leonardo Giusti for helpful discussions on various issues related to this work. All numerical calculations were performed on a dedicated PC cluster at CERN. We are grateful to the CERN management for providing the required funds and to the CERN IT Department for technical support. F. P. acknowledges financial support by an EIF Marie Curie fellowship of the European Community's Seventh Framework Programme under contract number PIEF-GA-2009-235904. A.1 Dirac operator and renormalization constants The lattice theory considered in this paper is set up as usual, using the Wilson gauge action and the standard O(a)-improved Wilson-Dirac operator. The notation and normalizations are as in ref. [7]. In particular, c sw and c A denote the coefficients of the Pauli term in the Dirac operator and the O(a) term required for the improvement of the axial current. Here these coefficients were set to the values given by the nonperturbatively determined interpolation formula quoted in ref. [19] (see table 3). The values of the renormalization constant Z A of the axial current listed in table 3 were obtained by evaluating the interpolation formula given in ref. [20]. 
In the case (13) of the renormalization constant Z P of the pseudo-scalar quark density, the quoted values are the ones required to pass from the lattice normalization of the density to the one in the MS scheme of dimensional regularization at normalization scale µ = 2 GeV. The constant was calculated in two steps, first passing from the lattice to the renormalization-group-invariant normalization [21] and then from there to the MS scheme [22]. A.2 Quark masses The renormalized quark mass in the MS scheme is given by where m is the bare current-quark mass, m q = m 0 − m c the subtracted bare mass and m c the critical bare mass. Here and below, b X (where X = A, P, . . .) denotes an improvement coefficient required to cancel lattice effects proportional to am q . At fixed gauge coupling, the current quark mass is related to the subtracted bare mass through Using the Schrödinger functional, the coefficients Z and b m −b A +b P were determined non-perturbatively by Guagnelli et al. [23] (see table 4). Also shown in table 4 is the current quark mass at one value of the hopping parameter κ = (8 + 2am 0 ) −1 . Together with eq. (A.2), these data allow the current quark mass to be estimated at larger values of κ, where a direct computation on the lattices considered in this paper tends to be compromised by the presence of accidental near-zero modes of the Dirac operator. A.3 Renormalization of the mode number The renormalization and improvement properties of the mode number were discussed in detail in ref. [4]. In particular, it was shown there that is only known to one-loop order of perturbation theory [24,4]. for some specified (small) value of ǫ. This choice ensures that h(x) provides a uniform approximation to the step function in the range |x| ≥ √ ǫ, with maximal absolute deviation equal to 1 2 δ. Moreover, inspection shows that h(x) decreases monotonically in the transition region − √ ǫ ≤ x ≤ √ ǫ. For a given degree n and transition range ǫ, the coefficients of the minmax polynomial P (y) can be computed numerically using standard techniques. An efficient procedure was described in ref. [25], for example. The mass M * ∝ M is then determined through As explained in appendix B of ref. [4], this convention is intended to minimize the deviation Tr{P M − R 4 M } . In the present context, other choices of M * would however do just as well, since eq. (2.2) is expected to hold for any M . Small approximation errors δ are achieved with moderately high degrees n if ǫ is not very small. For n = 32 and ǫ = 0.01, for example, one obtains
2010-10-06T12:03:09.000Z
2010-08-04T00:00:00.000
{ "year": 2010, "sha1": "2eabb79f57e5ca846184144d6065c041ff9f999c", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP09(2010)110.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "2eabb79f57e5ca846184144d6065c041ff9f999c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
10177251
pes2o/s2orc
v3-fos-license
Imagining Success: Multiple Achievement Goals and the Effectiveness of Imagery ABSTRACT Imagery (richly imagining carrying out a task successfully) is a popular performance-enhancement tool in many domains. This experiment sought to test whether pursuing two achievement goals (vs. one) benefits performance after an imagery exercise. We examined mastery goals (aiming to improve skill level) and performance goals (aiming to outperform others) among 65 tennis players who were assigned to a mastery goal condition, a performance goal condition, or a mastery goal and performance goal condition. After reading instructions for a service task, which included the goal manipulation, participants completed 20 tennis services. They then completed an imagery exercise and, finally, completed another 20 services. Postimagery service performance was better in the dual-goal condition than in the other conditions. Imagery is a mental performance improvement technique that involves "programming" body and mind with the purpose of responding optimally in a performance situation. The technique is based on the notion that an imagined action activates an internal cognitive representation that is the same as the cognitive representation underlying the "actual" action (see Holmes & Collins, 2001). Imagery has become one of the most popular psychological techniques to improve performance in athletic (e.g., Hall, 2001), academic (e.g., Vasquez & Buehler, 2007), and work contexts (e.g., Neck & Manz, 1992). Imagery is especially well studied in sports, and research in that area supports the claim that imagery improves a wide range of relevant, beneficial outcomes such as objective performance, exercise frequency, attentional focus, game-related tension, and confidence, but also a quicker recovery from injury-outcomes that have been examined across a range of sports contexts (Callow, Hardy, & Hall, 2001;Calmels, Berthoumieux, & Arripe-Longueville, 2004;Cupal & Brewer, 2001;Hale & Whitehouse, 1998;Page, Sime, & Nordell, 1999;Smith, Wright, & Cantwell, 2008). Although those studies clearly support the claim that imagery is a technique that can facilitate the beneficial effects of training and exercise, and certain factors have been identified that moderate the effectiveness of engaging in imagery, little is known on the psychological, especially motivational, factors that may facilitate the effectiveness of imagery itself. Providing insight into such potential facilitators holds practical utility because understanding relatively changeable conditions, such as motivation, that influence the effects of imagery may allow people to optimize their application of imagery techniques and to get the most out of training and practice. The current research explicitly examines PETTLEP imagery, as PETTLEP has been the most effective form of imagery compared to other forms of imagery, and set out to test under which achievement goal conditions PET-TLEP imagery has the strongest effect on tennis service performance. Prior research showed that, compared with pursuing either a mastery goal (aim to improve skill) or a performance goal (aim to outperform others), pursuing both goals predicted greater motivation to carry out imagery (Cumming, Hall, Harwood, & Gammage, 2002). In line with this study, we propose that multiple (as opposed to one) achievement goals should also lead to superior performance after PETTLEP imagery. 
PETTLEP imagery PETTLEP is a specific imagery technique that is considered most effective and is currently most prominent. The acronym indicates that physical, environment, task, timing, learning, emotional, and perspective relevant aspects of the imagery all need to be aligned with the aspects of the actual activity. This means the physical state (e.g., clothes and attributes), the environment (e.g., the playing field), the specific movements involved in the actual activity, and the speed of the actual activity all need to be the same as in the actual movement. Further, the athlete should adapt the imagery to his or her current skill level, to experience the emotions he or she would experience in a game situation, and finally to view the situation from his or her own perspective, as it would be seen if he or she was to carry out the activity himor herself (although sometimes using a different perspective may also be useful; see Callow & Roberts, 2010). Due to these criteria, the PETTLEP method has been argued to lead to a relatively realistic representation compared with other manners of imagery that did not include all these elements (Wakefield & Smith, 2012). Indeed, the PETTLEP imagery method has been shown to be relatively effective compared to other imagery methods (Smith, Wright, Allsopp, & Westhead, 2007;Wright & Smith, 2009), which may stem from the method's root notion of functional equivalence (Jeannerod, 1999). Because research indicates that engaging in imagery and actually carrying out an activity involve the same brain regions, imagery may help to strengthen the neural pathways that are involved in actual activities (Decety & Grèzes, 1999). Accordingly, the activity or movement to which imagery is applied should be as similar as possible to the actual activity or movement, and this functional equivalence criterion is precisely what PETTLEP is based on. Recent research again supports the utility of the method by showing that the PETTLEP method makes it easier for people to create a more vivid image in their mind when using PETTLEP imagery, compared to using more traditional methods (Anuar, Cumming, & Williams, 2016). Although general ability to conduct mental imagery is an important predictor of imagery effectiveness, breaking down the aforementioned method into its elements makes clear that those elements are also vastly different in their nature. This may be taken to suggest that effectively engaging in different aspects of this type of imaging would be facilitated by different goals and mind-sets. As we elaborate next, different PETTLEP elements just discussed align strategically with different goals, mastery, and performance achievement goals. Accordingly, we predicted that simultaneously pursing a mastery and a performance goal enhances the effect of imagery, relative to pursuing only one goal. Achievement goals Achievement goal theory (Elliot, 2005;Nicholls, 1984) distinguishes mastery goals and performance goals. Mastery goals imply a task-based standard of competence, meaning that individuals aim for a certain standard that reflects their skill level or proficiency on a task. Performance goals imply an interpersonal comparison standard, meaning that individuals aim for a certain standard of competence relative to others. Achievement goal theory also distinguishes, within mastery and within performance, the possibility that individuals gear their efforts either toward positive possibilities (competence) or to avoiding negative possibilities (incompetence). 
Avoidance goals almost exclusively predict detrimental outcomes (Van Yperen, Blaga, & Postmes, 2014), and therefore we decided not to include these in our study. Hence, when using mastery we refer to mastery-approach goals, and when using performance we refer to performance-approach goals. Accordingly, we conceptualize mastery goal as the aim to learn, to develop competence, to improve skill level. Performance goal is conceptualized as the aim to perform better than others. Certain elements of PETTLEP imagery align well with a mastery goal, for example, imagining performing a movement in a technically skilled way and adapting the task to one's individual skill level. Such elements a mastery-motivated individual would be more motivated to engage in because it directly serves the person's focal goal. Other elements of PETTLEP imagery align well with a performance goal, for example, imagining scoring points and envisaging the competitive game context and the emotions one feels while winning a point. When these respective goals are activated, not only are individuals more motivated to engage in activities that support these goals but the mental activation of these goals likely makes imagery of these elements in particular easier. That is, when the cognitive structures that are associated with these goals become active, closely related structures are more easily cognitively accessible. This means that the activation of the goal makes the experiences (such as those related to the elements of imagery) more easily accessible and therefore makes the imagery more effective. Hence, alignment means that there is a correspondence or fit between the goal that individuals are pursuing and the behaviors they are exhibiting (in their imagery), that the behaviors serve their focal goal and that the distinct imagery elements are more accessible because they are cognitively associated with the respective goal construct. When individuals pursue both goals, the means (elements, imagines behaviors) used in the activity align with individuals' motivation serve both their goals. Although the latter has not been tested with regard to achievement goals, these achievement goals are highly relevant because they apply strongly whenever individuals find themselves in achievement situations. In addition, it should be noted that the notion that imagery serves several functions is not new; it can be traced back to important work such as Paivio (1985). In support of our reasoning, Cumming et al. (2002) found that athletes with a balance of mastery and performance orientations also reported "greater motivation to perform the functions of imagery that would help them to maximize their performance" (p. 127), but their research did not test whether such individuals would, indeed, show better performance than individuals with only one of these goal. Similar to Cumming and colleagues' rationale, we suggest that PETTLEP imagery may be instrumental in the pursuit of mastery and performance goals and, as such, the overall utility of the imagery is greater when individuals pursue both these goals (Kruglanski et al., 2002). As a consequence, individuals are more motivated and committed to the imagery activity (it is more useful to them) and the imagery may be more effective (Shah & Kruglanski, 2000). Hence, we hypothesized that imagery leads to better performance among individuals with both a mastery and a performance goal, compared to individuals who pursue either one of these goals. 
We tested this hypothesis in the context of a tennis service exercise with players of moderate skill levels. Participants Participants were 65 tennis players (24.6% women, M age = 27.09, SD age = 11.32) with classification of between Levels 3 and 5 according to the classification of the Royal Dutch Lawn Tennis Association (corresponding to between Levels 4 and 7 of the United States National Tennis Rating Program). Male and female participants were distributed equally across the conditions. Erring on the conservative side, we aimed to recruit 30 participants per cell, yet we did not attain this number, and we decided to stop collecting data at a point in time when it became impossible to recruit participants, because they had a vacation from their training program (see next). Procedure This experiment was approved by the ethics committee at the first author's institution. We approached teachers from the Tennis Association's training academy and asked them to suggest study participation to their students (tennis coaches in training). Interested players were sent an e-mail with information about the study and a request to complete a brief online questionnaire, among other things, to measure demographic variables. Participants were then approached for appointments to participate in an on-site session. During the experiment, each participant individually joined the researcher on the tennis court. First, participants were given another opportunity to read the study information. After signing informed consent, participants were given the opportunity to serve 12 balls as a warm-up (no further warm-up instructions were given to the participants). Subsequently, participants received written instructions for the task (see next), and the researcher placed the target in the service box (see Figure 1). These instructions integrally included one of the three manipulations (mastery, performance, or both; see next for details), to which participants were assigned randomly. After reading instructions, participants carried out the first service task. After making 20 services, participants read instructions for the PETTLEP imagery. When participants indicated they had completed the imagery, they again carried out 20 services. Afterwards, they completed two manipulation check items. Achievement goal manipulation and service task instructions The manipulation was identical to that used by Murayama and Elliot (2011), and instructions were adapted to the task. Participants in the mastery condition read, "This exercise will help you to improve your tennis skills. Focus on the exercise and do your best to improve your tennis skills." Participants in the performance condition read, "This exercise allows you to show that your tennis skills are better than those of others. Focus on the exercise and do your best to perform better than other tennis players." Participants in the dual-goal condition read, "This exercise will help you to improve your tennis skills, and to show that your tennis skills are better than those of other tennis players. Focus on the exercise and do your best to improve your tennis skills and to show that your tennis skills are better than those of other tennis players." Next, all read the instructions for the service task: The task is to carry out 20 services and, in doing so, to try to hit the target in the service box. Hitting the target results in two points. Not hitting the target, but hitting the service box, results in one point. Not hitting the service box results in zero points.
Take as much time for each service as you think you need. The instruction/manipulation ended by repeating the second sentence of the manipulation. PETTLEP instructions The instructions were based on instructions used by Smith et al. (2008) among golf players. Adapted to the tennis service, the instructions read, Later on, stand at the baseline with your racket in your hand. Imagine serving 20 balls and hitting the target every time. In your mind, try to imitate as complete an experience of the serve as possible without actually moving. Feel the movements that the body makes during the service; small responses in your muscles are normal and don't need to be suppressed. You see how you toss the ball in the air and next how the ball makes its way from the face of the racket to the target. Feel the emotions you experience before you're about to serve and feel the emotions you experience when you see the target being hit. Imagine that, after every service, you take the time to prepare for the next ball. Visualize the 20 services in real time and envisage the situation as if you are seeing it through your own eyes. Start visualizing the 20 services when you are ready. When you're done, let the researcher know. Performance measurement The researcher kept track of the number of times participants missed (0 points: M1 = 7.86, SD1 = 2.63; M2 = 7.38, SD2 = 2.67), the number of times they hit the service box (1 point: M1 = 10.78, SD1 = 2.71; M2 = 10.88, SD2 = 2.63), and the number of times they hit the target (2 points: M1 = 1.34, SD1 = 1.34; M2 = 1.75, SD2 = 1.52). For the service task, Wilson US Open balls were used. The target was a doormat of 50 × 30 cm that was placed in the service box on the deuce side (see the supplement). Note that the correlations between the pre- and postmeasurements were .36, .60, and .71, for the number of times on target, the number of times hitting the service box, and the number of misses, respectively. Manipulation checks At the end of the experiment, participants responded to two items (Elliot & Murayama, 2008), namely, "My goal during the service task was to improve my tennis skills" (M = 3.55, SD = 1.21) and "My goal during the service task was to do better than other tennis players" (M = 3.12, SD = 1.40). Participants responded to these items on a scale ranging from 1 (completely disagree) to 5 (completely agree). Preparatory analyses We first examined the manipulation checks. Considering that we assumed the dual-goal condition to activate both goals, and that we assumed each single-goal condition to activate its intended goal more strongly than the condition targeting the other single goal, the following should be observed: The mastery item should be rated lower in the performance-goal-only condition, compared to the other two conditions, and the performance goal item should be rated lower in the mastery-goal-only condition, compared to the other two conditions. This pattern was indeed observed, as the mastery goal was rated lower in the performance condition (M = 3.09, SD = 1.34) compared to the mastery condition (M = 3.82, SD = 1.01). The number of misses and the number of points in the service box were highly negatively correlated, r = −.84, suggesting that they may reflect a similar variable and together unitarily reflect performance on the task.
However, the correlation between the number of services in the service box and the number of services on the target was also negative, r = −.26, and the correlation between the number of misses and the number of points on the target was only r = −.31. Although it may be intuitive that the two "positive" indicators together reflect performance, the negative correlation indicates that these two indicators together would not be a valid representation of the same construct (e.g., performance). That is, when two variables represent the same underlying construct, they should be positively correlated. The negative correlation shows that the two do not represent the same construct in a valid way and indicates that it would not be desirable to add them together. Therefore, we decided to analyze the three performance indicators separately.

Main analyses

We expected that, after imagery, individuals would perform better in the dual-goal condition compared with the other conditions. Table 1 shows the mean values of the three separate indicators within each condition on the preimagery task and the postimagery task. It also shows the post-minus-pre difference, which reflects the degree of improvement after (vs. before) imagery within each condition. In line with recommendations by others (Valentine, Aloe, & Lau, 2015), Table 2 shows the unstandardized mean differences and the Cohen's D effect sizes for the differences between the three conditions on all of the variables also shown in Table 1. Note that D reflects the difference between the conditions divided by the overall (pooled) standard deviation, thus indicating how many standard deviations separate the conditions.

First, the number of misses (0 points) was smaller in the dual-goal condition (M = 6.33, SD = 2.39) than in the mastery condition (M = 7.55, SD = 2.69) and the performance condition (M = 8.23, SD = 2.69). Note that the "improvement" post imagery indicates that participants in the dual-goal condition, on average, missed about one time fewer, whereas this number was close to zero for the performance goal condition and was 0.50 for the mastery goal condition. Second, the number of points in the service box (1 point) was greater in the dual-goal condition (M = 12.29, SD = 2.53) than in the mastery condition (M = 10.55, SD = 2.79) and the performance condition (M = 9.86, SD = 1.98). Again, the "improvement" post imagery indicates that participants in the dual-goal condition, on average, had nearly one (0.95) more hit in the service box after imagery than before imagery. This number was close to zero (0.09) for the mastery-only condition and was even negative (−0.73) for the performance goal condition. Third, the differences between the conditions in the number of hits on the target (2 points) were much smaller and even went slightly in the opposite direction. That is, although all conditions improved slightly, the improvement in the mastery condition (0.45) and the performance goal condition (0.68) was slightly larger than the improvement in the dual-goal condition (0.10). As Tables 1 and 2 show, the number of hits on the target was very low, and the effects observed on that indicator are much smaller than the effects consistently found on the other two indicators of performance improvement. Considering these two indicators, only participants in the dual-goal condition showed a clear pattern of performance improvement on both.
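To make the effect size metric concrete, the sketch below shows one way the between-condition D values reported in Table 2 could be computed: the mean difference divided by the pooled standard deviation. The function name and the example scores are purely illustrative assumptions and are not the authors' analysis script or their raw data.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference between two conditions divided by the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = sum(group_a) / n_a, sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)  # sample variance
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical post-imagery miss counts (0-point serves) for two conditions
dual_goal = [6, 5, 7, 6, 8, 6]
performance = [8, 9, 7, 8, 9, 8]
print(round(cohens_d(dual_goal, performance), 2))  # negative: fewer misses in dual-goal
```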
Discussion Results indicated that participants in the dual-goal condition served inside the service box more often, and missed less often, than participants in the other two conditions. Furthermore, the rate of improvement with regard to these two indicators was consistently greater in the dual-goal condition. The finding that participants in the dual-goal condition exhibit fewer misses and more services in the service box suggests that their performance was indeed better compared to the other two conditions. It seems similarly unlikely that this finding is due to a practice effect, because there is no reason to indicate that individuals with both performance and mastery goals benefit more from practice in general. For example, Van Yperen and Duda (1999) found no link between performance orientation and performance improvement in sports, Linnenbrink (2005) did not find that a dual-goal condition (in an educational context) led to greater improvement than a performance goal condition, and Valle et al. (2003) similarly found no difference between a dual-goal condition and a mastery goal condition. Because (a) studies do not suggest that performance goals typically have a strong impact on skill improvement and (b) neither goal seems to add to the effect of the other on improvement, it is plausible that the effects we observed stem from the imagery. As such, the data suggest that it may be beneficial to pursue both achievement goals when using imagery. Related to the methodological choice that participants always completed 20 services, there are negative correlations between some of the three indicators. The methodological choice also necessitates that the indicators are not wholly independent of each other, which makes it debatable whether analyzing them as separate dependent variables is valid. Important to note is that the mean score of the on-target serves (2-point scores) was very low, suggesting that hitting the target was too difficult. The choice of this target was made because the doormat is used in tennis training sometimes to aid in practicing aiming of the serve, so we attempted to make the exercise as realistic as possible in order to reduce participants' awareness of being in a unique study situation. It should be noted that rewarding services within the service area was done for the same reason, experimental realism, but that such an incentive might lead participants to satisfice, to settle for a less ambitious outcome. This would mean that the 1-point score is not a valid indicator of performance and especially not of improvement. We would argue, however, that it is more plausible that hitting the target was too difficult. Moreover, combined with the improvement in terms of the reduction in misses, the increase in the number of services within the service box indicates quite clearly that performance improvement was greatest in the dual-goal condition. In our analysis, in addition to examining mean values in the conditions, we also looked at the difference between post-and preimagery scores as indicators of improvement. A criticism of such a difference scores approach is that (especially in pre-post test designs) the two scores composing the difference score are correlated, which can decrease the reliability of the difference score itself. However, Trafimow (2015) recently argued and showed that this mostly becomes a problem in cases where pre-post correlations are extremely large, which was not the case in this study (see Materials section). 
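As a rough illustration of why moderate pre-post correlations are not fatal for difference scores, the snippet below evaluates the classical psychometric formula for the reliability of a difference score under the simplifying assumption of equal pre and post variances. The component reliability of .80 is a hypothetical stand-in, not a figure reported in the study; the first three correlations are the observed pre-post correlations for the three indicators, and .95 illustrates the extreme case.

```python
def difference_score_reliability(rel_pre, rel_post, r_prepost):
    """Classical estimate of difference-score reliability, assuming equal variances."""
    return ((rel_pre + rel_post) / 2 - r_prepost) / (1 - r_prepost)

# Assumed component reliability of .80 (illustrative); observed pre-post r's plus .95
for r in (0.36, 0.60, 0.71, 0.95):
    print(r, round(difference_score_reliability(0.80, 0.80, r), 2))
```

Only when the pre-post correlation approaches the reliabilities of the component scores does the estimated reliability of the difference collapse, which is consistent with the argument above.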
One limitation of the current research is that we manipulated the achievement goals in a relatively vague manner. We closely followed manipulations used in previous experimental research (Murayama & Elliot, 2011), but the goals could also have been formulated in terms of more specific outcomes; tennis serving, for example, has many different skill-relevant elements on which an athlete might focus individually but which the mastery goal manipulation did not specify. Likewise, the setting we used did not explicitly create a social context in which performance goals would become highly relevant. At the same time, it should be noted that a social context is also not present in most other achievement goal research (e.g., Murayama & Elliot, 2011). The goal of competing with others is a goal that every individual understands, and it can easily be activated even in the absence of an explicit social context, guiding motivation and behavior. Although the purpose of PETTLEP is to create the best possible functional match to a real performance situation, one might wonder whether the manipulation of achievement goals would likewise provide a functional match. Our study examined performance in a practice situation, but one may indeed wonder whether the effects extend to a game situation. Achievement goal research seems to assume that goals will have similar effects across situations, and research suggests that, although individuals' achievement goals differ significantly across achievement domains, the effects of these goals within a domain are the same as in other domains (e.g., Van Yperen, Hamstra, & van der Klauw, 2011). Nevertheless, important questions for future research and practice are whether athletes' achievement goals remain the same across game and practice contexts and whether imagery and achievement goal effects are moderated further by other variables. Previous research applying similar PETTLEP techniques (e.g., Smith et al., 2007; Smith et al., 2008) has found convincing evidence for the utility of PETTLEP imagery. In the current research, we found that the dual-goal condition showed improvement and that the mastery goal condition showed some improvement in terms of the number of misses. As such, one might wonder whether this research replicated the classic benefit of PETTLEP imagery. In this regard, it should be noted that Smith and colleagues used much longer imagery training, lasting several days or weeks, whereas we examined only a brief exercise. It seems likely that, even among performance goal individuals, long-term PETTLEP imagery would be helpful. All in all, this is the first research investigating athlete-level motivational factors that influence the effectiveness of imagery, and the results suggest that pursuing both a mastery goal and a performance goal is more beneficial when using imagery than pursuing only one of these goals, a finding that aligns well with the notion that imagery can serve multiple different functions (Paivio, 1985). More research into factors that facilitate the effectiveness of imagery is needed, because knowing how to get the most out of mental techniques is important for users of imagery across a range of performance situations such as the workplace, the classroom, and the sports field.
For example, part of our argument was that activation of both achievement goals would make the cognitive structures needed for successful imagery of all PETTLEP elements more easily accessible. This could be taken to imply that individuals, under such conditions, show more complete imagery of the situation, and it would be interesting to examine whether this indeed contributes to the observed effects. Such research could also be useful to those who shape others' goals (e.g., as coaches do with athletes or teachers with students) and to those who try to motivate themselves to maximize the benefits of imagery specifically, and of training in general.
Alternative splicing results in RET isoforms with distinct trafficking properties The RET gene encodes a receptor tyrosine kinase that is alternatively spliced to two protein isoforms that differ in their C-terminal peptide sequences (RET9, RET51). These unique C-terminal tails produce distinct subcellular localizations and intracellular trafficking properties, which affect downstream signaling. INTRODUCTION The RET gene encodes a receptor tyrosine kinase (RTK) that is widely expressed in neuroendocrine tissues (reviewed in Arighi et al., 2005). RET-knockout mice are carried to term but die shortly after birth, displaying a lack of enteric innervation and kidney dysplasia (Schuchardt et al., 1994). In humans, a number of loss-of-function mutations have been identified throughout the RET gene that lead to Hirschsprung disease, a congenital disorder characterized by a loss of enteric neurons in the distal portions of the colon and small intestine (reviewed in Amiel and Lyonnet, 2001;Burzynski et al., 2009). Conversely, mutations that result in constitutively active receptors have been linked to tumors of various neuroendocrine tissues, including the thyroid, parathyroid, and adrenal glands (reviewed in Arighi et al., 2005;Lai et al., 2007). In addition, RET plays roles in spermatogenesis, development of the sensory, sympathetic, and parasympathetic nervous systems, and maintenance of adult midbrain dopaminergic neurons (Kramer et al., 2007;reviewed in Arighi et al., 2005). Activation of RET occurs through the formation of a multimeric signaling complex consisting of RET's soluble ligand glial cell line-derived neurotrophic factor (GDNF) and a membrane-bound patterns (Myers and Mulligan, 2004;Hickey et al., 2009). Finally, in cell-based assays, RET51 possesses an increased intrinsic ability to transform cells and induce neurite outgrowth, potentially suggesting a higher level of signaling downstream of this isoform (Pasini et al., 1997;Rossel et al., 1997;Iwashita et al., 1999;Le Hir et al., 2000). In vivo animal models have also shown differences in RET isoform functions. Mice that express monoisoformic mouse/human hybrid Ret9 (Ret 9/9 mice) or Ret51 (Ret 51/51 ) show distinct phenotypes (de Graaff et al., 2001). Ret 9/9 mice are viable, whereas Ret 51/51 mice show kidney dysplasia and delayed development of the enteric nervous system (de Graaff et al., 2001;Barlow et al., 2003). Similarly, zebrafish expressing only ret9 protein formed an intact enteric nervous system, suggesting that ret51 is not required for this process (Heanue and Pachnis, 2008). In addition, it has been shown that the RET51 transcript is not present until later stages of human development (Ivanchuk et al., 1998), which may account for its dispensability in many of these studies. Overall, these data are consistent with the ability of RET9 and RET51 to influence both similar and distinct gene expression patterns downstream of their activation (Myers and Mulligan, 2004;Hickey et al., 2009). Exocytic trafficking of membrane proteins such as RTKs is complex and requires interactions with a host of proteins that target nascent peptides to the endoplasmic reticulum (ER), ensure their proper folding, mediate posttranslational modifications, and escort them to the plasma membrane (reviewed in Cross et al., 2009). Once embedded in the plasma membrane, RTKs can bind extracellular ligands and become activated. 
Activation results in recognition of the RTK by the endocytic machinery of the cell, internalization, and targeting to intracellular structures for degradation or recycling back to the plasma membrane (reviewed in Maxfield and McGraw, 2004;Sorkin and von Zastrow, 2009). Thus, exocytosis and endocytosis are critical mediators of RTK subcellular localization. RTK signaling can be enhanced, directed toward or away from individual signaling pathways, or quenched, depending on the subcellular compartment in which it resides. Together mechanisms that direct RTK trafficking can have direct and profound effects on downstream signaling. Although comprehensive studies of RET exocytosis and endocytosis are lacking, some insights into these processes have been provided. RET9 and RET51 are rapidly glycosylated within the ER to produce a 155-kDa immature glycoprotein (Takahashi et al., 1991). Further processing occurs within the Golgi apparatus, resulting in expression of a fully glycosylated, mature, 175-kDa RET molecule on the plasma membrane (van Weering et al., 1998). Previously, we showed that RET activation leads to rapid internalization and trafficking of the molecule to early endosomes, where it continues to activate downstream signaling pathways (Richardson et al., 2006). The E3 ubiquitin ligase c-CBL can ubiquitinate RET, targeting the receptor for lysosomal degradation (Scott et al., 2005;Richardson et al., 2009), and a role for the proteasome in RET degradation has also been suggested (Pierchala et al., 2006;Tsui and Pierchala, 2010). coreceptor, GDNF family receptor α 1 (GFRα1; reviewed in Airaksinen and Saarma, 2002). After ligand binding, RET initiates a number of downstream signaling cascades involved in cell growth, differentiation, proliferation, and migration, most notably phosphoinositide 3-kinase/AKT, extracellular-signal regulated kinase (ERK)/ mitogen-activated protein (MAP) kinase, SRC, β-catenin, and STAT3 pathways (Besset et al., 2000;Schuringa et al., 2001;Encinas et al., 2004;Gujral et al., 2008). The RET gene is alternatively spliced at its 3′ end to produce multiple protein isoforms. RET9 and RET51 are the most highly expressed and differ only in their 9 and 51 COOH-terminal amino acids, respectively ( Figure 1A; Tahira et al., 1990). In most tissues examined these isoforms are coexpressed; however, the RET9 transcript is often expressed at much greater levels relative to RET51 in human tissues (Ivanchuk et al., 1998;Le Hir et al., 2000;Lee et al., 2003). Although the primary signaling hub shared by all RET isoforms is tyrosine 1062 (Asai et al., 1996;De Vita et al., 2000;Hayashi et al., 2000), differences in the downstream amino acid sequences in RET9 and RET51, and the presence of an additional signaling hub in RET51 at Y1096 ( Figure 1A; Besset et al., 2000;Jain et al., 2006), have been suggested to contribute to both signal redundancy between RET9 and RET51 and certain distinct properties of each of the isoforms. Each isoform can induce a unique autophosphorylation pattern on intracellular RET tyrosine residues and assemble a distinct complement of adaptor proteins (Tsui-Pierchala et al., 2002). Specifically, RET9 and RET51 are known to differentially bind SHC, GRB2, c-CBL, and SHANK3 (Lorenzo et al., 1997;Schuetz et al., 2004;Scott et al., 2005). 
Gene expression profiling revealed that RET9 and RET51 activity can induce overlapping but distinct gene expression Cell lysates from HEK293 cells stably expressing RET9 or RET51 were separated by SDS-PAGE and immunoblotted with pan-RET, isoform-specific RET9 or RET51, and anti-γ-tubulin antibodies. (C, D) Equal volumes of cell lysate from retinoic acid-treated SH-SY5Y neuroblastoma cells (C) or 4-d-old rat pup GI cocultures (D) that coexpress RET9 and RET51 were separated by SDS-PAGE and immunoblotted with RET9 or RET51 isoform-specific antibodies. n1-n3 represent GI cocultures established from three individual rat pups. Figure S1B; Ponnambalam et al., 1996). We further investigated the localization of RET9 and RET51 in SH-SY5Y cells. As shown in Figure 2, D and E, both RET9 and RET51 could be seen in punctate structures near the plasma membrane. However, only RET51 was found to directly colocalize with EPN1-ENTH-green fluorescent protein (GFP), a marker of the inner leaflet of the plasma membrane (Ford et al., 2002), in unstimulated cells. Together these results are consistent with our observations in Figure 1 and suggest that immature RET9 accumulates in the Golgi, whereas RET51 matures relatively more effectively, resulting in a greater plasma membrane presence of this isoform. It was previously shown that RET9 transcript is expressed at higher levels relative to RET51 in multiple organisms and tissues (Ivanchuk et al., 1998;Le Hir et al., 2000;Lee et al., 2003). Because RET9 transcript levels appeared to correlate with the amount of immature RET9 protein in our various cell models (Figure 1, B-D), we investigated whether simple differences in RET9 and RET51 transcript expression could result in relatively more RET9 protein translation and an accumulation of immature RET9 in the ER and/or Golgi. Using quantitative real-time PCR, we confirmed that RET9 transcripts are expressed at higher levels relative to RET51 in SH-SY5Y cells and our primary rat GI cocultures ( Figure 3A). HEK293 cell lines stably expressing monoisoformic RET9 or RET51 cDNA had similar levels of RET9 and RET51 transcripts, respectively ( Figure 3A), likely due to their expression from the same promoter and lack of mRNA processing by the spliceosome. Of interest, despite similar transcript levels, a greater proportion of RET9 protein was still found in its immature form relative to RET51 in the HEK293 stable cell lines ( Figure 1B), suggesting that higher transcript levels are not responsible for the accumulation of immature RET9. We confirmed this by performing a series of transient transfections in HEK293 cells in which we titrated the amount of transfected RET9 or RET51 plasmid DNA from 0 to 1 μg. As expected, decreases in transcript levels, by incrementally decreasing the DNA copy number delivered into cells, were unable to increase the amount of mature RET9 protein seen in Western blots (Supplemental Figure S2). Independent of transcript levels and the amount of nascent protein delivered to the ER, a number of chaperone proteins reside within the ER, Golgi, and cytoplasm to assist in protein folding, mediate posttranslational modifications, or delay export of misfolded and incorrectly modified proteins (Cross et al., 2009). We did not expect that posttranslational processing at these sites would be responsible for the accumulation of immature RET9 because RET9 and RET51 possess identical extracellular domains-the region of RET located within the ER and Golgi during posttranslational processing. 
To confirm this, we compared Ret isoform maturation in primary rat GI cocultures grown at 30 and 37°C, as growth at reduced temperatures has been shown to improve maturation of difficult-to-fold proteins, including several RET mutants, that are retained in the ER and Golgi (Kjaer and Ibanez, 2003;Park et al., 2009). Growth at 30°C did not affect the relative ratio of immature to mature Ret9 or Ret51 (unpublished data). Our observation that the pool of immature RET9 protein is maintained independent of RET DNA copy number within the cell and growth temperature suggests that the accumulation of immature RET9 is not due to either higher levels of RET9 transcript or less efficient folding or posttranslational modification of the protein, relative to RET51. Instead, it appears that the cell has an intrinsic ability to deliver RET51 to the plasma membrane, via the ER and Golgi, more efficiently relative to RET9. RET EC KD TM Here, we present a comprehensive study of RET9 and RET51 subcellular localization and intracellular trafficking. We show that high levels of immature RET9 accumulate in the Golgi, whereas RET51 is efficiently matured and trafficked to the plasma membrane. In response to GDNF stimulation, both isoforms are targeted to the lysosome for degradation; however, RET9 appears to be targeted directly and efficiently, whereas a portion of RET51 molecules are recycled back to the membrane, making them available for continued signaling. Our data suggest that RET9 and RET51 possess distinct subcellular localizations and trafficking properties that help explain previously established isoform-specific downstream signaling characteristics. RET51 matures more efficiently than RET9 Four cell-based model systems were used throughout this study. HEK293 cell lines transiently or stably expressing GFRα1 and either RET9 or RET51 (Myers and Mulligan, 2004) were used to analyze the individual contributions of these isoforms to intracellular processes ( Figure 1B). SH-SY5Y neuroblastoma cells provide a model in which both RET9 and RET51 are expressed from RET's endogenous promoter at higher levels relative to primary tissues ( Figure 1C). HeLa cells transiently expressing RET9 or RET51 were well suited for direct visualization of intracellular trafficking by confocal microscopy due to their flat morphology and increased levels of RET expression relative to stable cell lines or cells endogenously expressing RET. Finally, a coculture model consisting of primary myenteric neurons, smooth muscle cells, and glia harvested from 4-d-old rats provided an endogenous system to which data from all other cell-based models could be compared ( Figure 1D; Rodrigues et al., 2011). In initial investigations, we saw a marked difference in the relative distribution of the mature (175 kDa) and immature (155 kDa) protein forms of each RET isoform. In HEK293 cell lines stably expressing RET9 or RET51, mature and immature RET51 were expressed in nearly equivalent levels; however, RET9 appeared predominantly in the immature form ( Figure 1B). Using isoform-specific antibodies for RET9 and RET51 (Supplemental Table S1), we noted a similar pattern of endogenous RET isoform expression in SH-SY5Y cells ( Figure 1C) and an even more striking distribution in primary rat gastrointestinal (GI) cocultures, where immature Ret51 was nearly undetectable but equivalent amounts of both mature and immature Ret9 were observed ( Figure 1D). 
Isoform-specific differences in mature protein expression, primarily decreased levels of mature RET9 protein, were most prominent in primary neurons, which had the lowest overall RET expression ( Figure 1D), and least obvious in cells stably overexpressing RET ( Figure 1B). The glycosylation events that result in RET's maturation from 155 to 175 kDa have been shown to occur in the Golgi (Cosma et al., 1998), suggesting that immature RET9 might accumulate in this region. We determined the localization of immature RET9 by immunofluorescence confocal microscopy in primary rat enteric neurons. In agreement with previously published data (Heanue and Pachnis, 2008;Rodrigues et al., 2011), RET was expressed in Hu antigen D (HuD, ELAV4)-positive neurons but not the smooth muscle or glia of our GI coculture model (Supplemental Figure S1A). Despite low expression of Ret in these neurons ( Figure 1D), we observed a region of Ret9 perinuclear accumulation that was not found in cells stained for Ret51 (Figure 2, A-C). Furthermore, the area enriched for Ret9 staining overlaid areas weak in HuD staining. HuD is a neuron-specific, mRNA-binding protein, primarily localized to the cytoplasm, where it interacts with ribosomes and is not expected to associate with perinuclear Golgi stacks (Burry and Smith, 2006), supporting RET51 internalizes more rapidly than RET9 On the basis of our earlier observations that a greater proportion of RET51 is found in the mature 175-kDa form relative to RET9 ( Figure 1, B-D) and that RET51 appears to be more abundant in the plasma membrane of SH-SY5Y cells (Figure 2, D and E), we investigated what effect localization may have on activation and internalization of RET9 and RET51 from the cell surface. Initially, we investigated the phosphorylation properties of RET isoforms by immunoprecipitating RET9 and RET51 from SH-SY5Y cell lysates and probing with pan-RET and anti-phosphotyrosine antibodies. As predicted, we found only the mature, 175-kDa band of RET9 and RET51 to be phosphorylated ( Figure 4A). Furthermore, the RET51 isoform was relatively more phosphorylated, although a portion of this increase may be due to the additional phosphotyrosines at Y1090 and Y1096 that are not present in RET9 ( Figure 1A). Next, we investigated the movement of RET9 and RET51 into the cell after activation by GDNF using a simple cell surface biotinylation approach. Cells were either left untreated or incubated with GDNF for 20 min, followed by biotinylation of surface proteins, cell Because RET9 and RET51 differ exclusively by amino acids in their C-terminal tails, we predicted that these distinct sequences are functionally responsible for differences in isoform localization. To determine whether one or both isoform tails were important for mediating RET maturation and transport to the plasma membrane, we developed two novel RET mutant constructs. First, we replaced the first nine unique amino acids (1064-1072) of RET51 with the corresponding amino acids from RET9 (referred to as 9-in-51; Figure 3B). In the second construct, the RET9 C-terminal amino acids (1064-1072) were added terminally, immediately downstream of the intact RET51 sequence (referred to as 51+9; Figure 3B). Of interest, these two constructs displayed different phenotypes. 
The 9-in-51 construct displayed a phenotype similar to that of wild-type RET9, appearing predominantly as immature protein, whereas 51+9 showed increased levels of mature protein relative to wild-type RET9, a phenotype identical to that of wild-type RET51 ( Figure 3C). This suggests that the RET9 C-terminal amino acid tail, within the context of its normal upstream sequences, is responsible for the retention of this isoform within the Golgi apparatus. FIGURE 2: Immature RET9 accumulates in a perinuclear region. (A) GI cocultures were isolated from 4-d-old rat pups and plated on collagen-coated glass coverslips, grown for 60 h in 5% FBS, serum starved overnight, fixed, and stained for the neuronal marker HuD (red) and for Ret9 or Ret51 (green). Arrows highlight areas devoid of HuD staining. Scale bar, 40 μm. (B) Pixel intensity profiles were calculated along the dashed lines in the Merge panels of A and plotted. (C) Mean Ret9 or Ret51 signal intensity was calculated for two circular ROIs (a perinuclear ROI devoid of HuD staining [pnuc] and a cytoplasmic ROI with positive HuD staining [cyto]) in >30 enteric neurons stained for Ret9 or Ret51 and HuD. The ratio of mean Ret signal intensity (cyto/pnuc) was determined and plotted. Cells were derived from five individual rat pups for each isoform. (D) Retinoic acid-treated SH-SY5Y cells were transiently transfected with the plasma membrane marker EPN1-ENTH-GFP (green), fixed, and stained for RET9 or RET51 (red). Inserts in top left-hand corners are magnifications of the boxed region in the merged panels, highlighting regions of welldefined plasma membrane (arrows). Scale bars, 10 μm in full image, 2 μm in magnification. (E) Mean pixel intensity of the RET and EPN1-ENTH-GFP signals was determined from 3-pixel-wide lines drawn along areas of well-defined plasma membrane where no neighboring cells were present. The ratio of RET9 or RET51/EPN1-ENTH-GFP intensity (I RET /I EPN1-PH-GFP ) for 3-pixel bins was calculated along the entire line and plotted. n ≥ 10 lines for both RET9 and RET51. *p < 0.005. * to each other in early endosomes and late endosomes/lysosomes. Consistent with Figure 4C, we observed a more robust internalization of RET51 to early endosomes, relative to RET9 ( Figure 4D). Within 5 min of GDNF addition, RET51 was localized to significantly more EEA1-positive endosomes than before GDNF addition (p < 0.005; Figure 4D). In comparison, a significant increase in RET9-containing endosomes was achieved only after 30 min of GDNF treatment (p < 0.005, Figure 4D). The percentage of early endosomes containing RET9 increased from 0 to 30 min, whereas RET51containing endosomes reached a maximum by 15 min ( Figure 4D; discussed later). At both 15 and 60 min after GDNF addition, RET51 occupied a significantly greater percentage of early endosomes then did RET9 (p < 0.005; Figure 4D). Unexpectedly, after 30 min of GDNF treatment, RET51 was present in significantly fewer early endosomes than it was at 15 or 60 min after addition of GDNF (p < 0.005, Figure 4D; discussed later). In addition, the distance of each RETpositive early endosome from the plasma membrane was measured to determine whether RET-positive endosomes trafficked deeper into the cytoplasm over time. 
Once again, endosomes containing RET51 trafficked significantly deeper into the cytoplasm by 5 min of GDNF treatment (relative to the 0-min time point; p < 0.005; Figure 4D), whereas RET9-positive endosomes required 30 min to internalize significantly deeper relative to unstimulated cells (p < 0.005; Figure 4D). Furthermore, RET9 endosomes increased their distance from the plasma membrane throughout the entire 60-min time course; however, the mean distance of RET51 endosomes from the plasma membrane was relatively unchanged at time points >15 min ( Figure 4D; discussed later). As expected, both RET isoforms took longer to reach lysosomes (as determined by colocalization with LAMP2) than early endosomes, and, again, a significant increase of RET51 within these structures relative to unstimulated cells was observed earlier than that of RET9 (30 vs. 60 min, respectively; Figure 4D). Again, RET9 colocalization with lysosomes gradually increased throughout the time course, whereas lysosomal localization of RET51 showed little change after 15 min of GDNF treatment ( Figure 4D; discussed later). Isoform-specific recycling of RET51 Our observation that RET51 occupies significantly fewer early endosomes after 30 min of GDNF stimulation (relative to the 15-and 60-min time points; Figure 4C), as well as the apparent lack of its accumulation in early endosomes and lysosomes at time points >15 min, led us to investigate whether RET51 could enter endosomal recycling pathways. We previously used a novel biotinylation assay to provide evidence of RET51 recycling when it is overexpressed in HeLa cells (Richardson and Mulligan, 2010). Using this assay, we observed RET colocalization with cell surface biotin (Supplemental Figure S3). We noted colocalization of RET51 lysis, and the collection of biotinylated protein on streptavidincoated beads. As expected, both surface-localized RET9 and RET51 were biotinylated in the absence of GDNF. However, upon GDNF treatment, we observed a greater loss of the surface-localized RET51 (46% of original surface RET51) than RET9 (25% of original surface RET9; Figure 4B) Biotinylation of cell surface RET in stimulated and unstimulated SH-SY5Y cells was further used to monitor RET internalization over time. Consistent with Figure 4B, Figure 4C shows that both RET isoforms were internalized upon incubation with GDNF, although RET51 internalization was more robust. Thirteen percent of the total surface-localized RET51 was internalized within the first 5 min of GDNF stimulation. However, nearly 30 min of GDNF stimulation was required to internalize a similar fraction of the surface-localized RET9 (16%; Figure 4C). Together these results suggest a more robust and efficient internalization of RET51 in response to GDNF stimulation relative to RET9. We further investigated RET9 and RET51 trafficking in HeLa cells using a method that combines confocal microscopy, surface biotinylation, and immunofluorescence to observe RET internalization from the plasma membrane and trafficking to the lysosome (Richardson and Mulligan, 2010). This method allowed us to restrict our analyses to the specific subset of intracellular vesicles that had been formed by plasma membrane invagination after addition of GDNF. Thus, we were able to quantify the movement of RET9 and RET51 relative with cell surface biotin before GDNF addition and after 30 min of incubation with GDNF but not after 15 min of GDNF stimulation (Richardson and Mulligan, 2010;Supplemental Figure S3B). 
At no time point were we able to observe colocalization between RET9 and cell surface biotin (Supplemental Figure S3A). Using our SH-SY5Y cell model, we expanded on the experiments in Figure 2D to examine the colocalization of RET51 with the plasma membrane marker EPN1-ENTH-GFP at various times after GDNF addition. We found a significant depletion of RET51/EPN1-ENTH-GFP colocalization within 5 min of GDNF addition ( Figure 5, A and B). However, colocalization was quickly reestablished by 20 min after GDNF addition. This observation was confirmed via surface biotinylation and Western blotting ( Figure 5C). Together, these data clearly demonstrate a portion of RET51 protein recycling back to the plasma membrane after internalization. This observation appears to be consistent across multiple cell lines and under various levels of RET51 expression. RTKs that undergo recycling, such as epidermal growth factor receptor (EGFR) and MET, have been shown to avoid degradation, resulting in extended signaling potential (reviewed in Parachoniak and Park, 2012). To determine whether recycling prevents degradation of RET51, prolonging its signaling capacity within the membrane and cytosol relative to RET9, we treated primary rat GI cocultures with brefeldin A to block RET maturation. Brefeldin A is an inhibitor of Golgi function that is known to trap nascent peptides in the ER, preventing their delivery to the Golgi (Runeberg-Roos et al., 2007). Brefeldin A addition to rat GI cocultures resulted in an accumulation of immature Ret9, and to a lesser extent immature Ret51, over time, indicating that transport of both Ret isoforms from the ER to Golgi and subsequent glycosylation was blocked ( Figure 5D). In contrast, the mature protein band of Ret9 decreased in intensity over time, becoming nearly undetectable by 3 h ( Figure 5D). The mature protein band of Ret51 maintained a constant intensity throughout the 3-h time course, indicating greater stability of the mature form of this isoform relative to Ret9 ( Figure 5D). To confirm that this result was not due to differences in relative RET9 and RET51 protein concentration or the use of individual antibodies to visualize each isoform, we repeated the experiment in our monoisoformic HEK293 stable cell lines using a pan-RET antibody ( Figure 5E). In the presence of brefeldin A and GDNF, the pool of mature RET9 protein was quickly degraded. Levels of mature RET51 decreased more FIGURE 4: RET51 internalizes more efficiently than RET9. (A) Retinoic acid-treated SH-SY5Y cells were serum starved, incubated with or without 100 ng/ml GDNF for 20 min, and lysed. Cell lysates were immunoprecipitated (IP) with antibodies specific to RET9 or RET51, separated by SDS-PAGE, and immunoblotted (IB) with panRET or anti-phosphotyrosine (pTyr) antibodies. A portion of cell lysate was retained and probed with anti-γ-tubulin as a loading control. (B) Retinoic acid-treated SH-SY5Y cells were serum starved, incubated with or without GDNF for 20 min, cooled to 4°C, surface biotinylated, and harvested. Biotinylated proteins were recovered on streptavidincoated beads and immunoblotted for RET9 or RET51. Densitometry, performed via ImageJ, was used to determine the percentage of the initial surface-localized biotinylated RET remaining after incubation with GDNF as indicated in lower panels. (C) Retinoic acid-treated SH-SY5Y cells were serum starved and surface biotinylated. 
After biotinylation, cells were returned to 37°C in the presence of 100 ng/ ml GDNF for the indicated times before the remaining cell surface biotin was stripped with MeSNa buffer, and cells were lysed. Cell lysates and biotinylated proteins were separated by SDS-PAGE and immunoblotted for RET9 or RET51. TP, total protein control; sample slowly, again indicating that this isoform is degraded at a slower rate relative to mature RET9 protein ( Figure 5E). Our data are consistent with RET51, but not RET9, being able to recycle back to the plasma membrane after activation and internalization. Recycling of RTKs has been shown to affect downstream signaling by mediating longer, sustained signaling after ligand binding (Parachoniak and Park, 2012). We previously showed that phosphorylated RET is able to activate the ERK/MAP kinase signaling pathway after internalization from early endosomes within the cytoplasm (Richardson et al., 2006). Therefore, we investigated ERK1/2 signaling downstream of RET9 and RET51 in HEK293 monoisoformic cell lines. Consistent with the more rapid internalization and recycling of the activated RET51 receptor, we found that activation of ERK1/2 initiated earlier and was sustained for a longer duration in RET51expressing cells than in a RET9-expressing cell line ( Figure 5F). This suggests that recycling of RET51 plays an important role in modulating signaling downstream of this isoform. Indeed, recycling of RET51 provides a strong mechanistic explanation for the increased transforming capacity of this isoform. RET51 recycles through a RAB11-associated pathway To confirm RET51's presence in a recycling pathway, we performed colocalization studies with various markers of recycling endosomes. Barysch et al. (2009Barysch et al. ( , 2010 developed an in vitro system that efficiently sorts cointernalized molecules (e.g., transferrin and lowdensity lipoprotein) into separate vesicles based on their final destination (e.g., recycling vs. lysosome). We used this system to examine the ability of RET9 and RET51 to be sorted away from transferrincontaining endosomes in postnuclear supernatants (PNSs). Transferrin is a well-characterized small iron-binding, soluble protein (Hopkins, 1983;Hopkins and Trowbridge, 1983;Maxfield and McGraw, 2004). After binding its cell surface receptor, transferrin is rapidly internalized to early endosomes, releases its bound iron molecules, detaches from its receptor, and sorts to recycling endosomes before being exocytosed (Hopkins, 1983;Hopkins and Trowbridge, 1983;Maxfield and McGraw, 2004). PNSs were produced from SH-SY5Y cells that had been stimulated with GDNF for 15 min and Alexa 488-labeled transferrin for 5 min. PNSs were then incubated either on ice or at 37°C for 45 min in the presence of an Figure 1D and grown in DMEM containing 5% FBS. Cells were treated with 5 μg/ml brefeldin A for the indicated times. Cells were lysed and immunoblotted for RET9 or RET51. (E) HEK293 cells stably expressing GFRα1 and RET9 or RET51 were serum starved overnight and treated with 100 ng/ml GDNF and 5 μg/ml Brefeldin A for the indicated times. Cell lysates were immunoblotted for RET using a pan-RET antibody. (F) HEK293 cells stably expressing RET9 or RET51 were serum starved overnight and incubated with GDNF for the indicated times. Cell lysates were immunoblotted using pan-RET, pERK1/2, and tubulin antibodies. Blots are representative of three independent experiments. 
surface (van der Sluijs et al., 1992), whereas RAB11 endosomes participate in a slower recycling pathway that involves movement through the endocytic recycling compartment (ERC), a perinuclear collection of RAB11-positive vesicles (Ullrich et al., 1996). We used colocalization studies in whole cells to determine whether RET51 or RET9 colocalized with either of these recycling endosome markers. HeLa cells were transiently transfected with RET9 or RET51 and RAB4-GFP or RAB11-GFP to determine whether RET is found within endosomes decorated with these markers. Both RET9 and RET51 displayed low levels of colocalization with RAB4-positive vesicles that was unchanged after addition of GDNF (Supplemental Figure S4A). Of interest, although cells expressing RET9 showed no changes in their intracellular distribution of RET (Supplemental Figure S4, A and B), GDNF addition to cells expressing RET51 and either RAB4 or RAB11 resulted in a perinuclear accumulation of RET51 ( Figure 6C and Supplemental Figure S4A). In cells expressing RET51 and RAB11, strong colocalization was seen between these two proteins at this perinuclear location ( Figure 6C). As stated earlier, no GDNFinduced change in RET9 distribution was seen in cells expressing RAB4 or RAB11 (Supplemental Figure S4). In addition, little colocalization was seen between RET9 and RAB11, and RET9 did not display a perinuclear accumulation after GDNF addition (Supplemental Figure S4B). The observation of RET51 in RAB11-positive endosomes further supports our model in which this isoform, and not RET9, is sorted to a recycling pathway. In addition, the perinuclear localization of RET51/RAB11 vesicles resembles the ERC, of which RAB11 is a marker, and suggests that RET51 recycles through the ERC via the slower, RAB11-associated recycling pathway. DISCUSSION Until recently, mechanisms of protein trafficking were believed to play passive roles in cellular signaling and metabolism. However, numerous data now show that the processes of exocytosis and endocytosis can modulate the function of transmembrane receptors by controlling their signaling potential in both space and time (reviewed in Sorkin and Von Zastrow, 2002;Cross et al., 2009;Gould and Lippincott-Schwartz, 2009;Parachoniak and Park, 2012). Here we showed that the two most abundant isoforms of RET-RET9 and RET51-display distinct subcellular localizations, trafficking properties, and downstream signaling capacity (summarized in Figure 7) in multiple model cell culture systems and primary rat enteric neurons. These distinct properties may contribute to some of the previously described differences in RET isoform downstream signaling and functional effects and their individual roles in development. Inefficient maturation of RET9 results in distinct subcellular localizations of RET isoforms Our data show that RET isoforms are coexpressed and that the RET9 transcript is expressed at higher levels relative to RET51 in SH-SY5Y cells and the enteric ganglia of neonatal rats. However, this does not translate into higher levels of RET9 protein at the cell surface. Here, we show an accumulation of immature RET9 protein in a perinuclear region that colocalizes with areas of the trans-Golgi network ( Figure 2, A-C, and Supplemental Figure S1B). Relative to RET51, which is rapidly processed to its mature form, RET9 is present in lower quantities in the plasma membrane (Figure 2, D and E, and Supplemental Figure S3). 
This suggests that RET9's accumulation in the Golgi limits the amount of functional RET9 receptors at the cell surface (Figure 2, D and E, and Supplemental Figure S3). However, it should be noted that only a relatively minor proportion of both RET9 and RET51 was localized on the plasma membrane (Figure 2, A and D, and Supplemental Figures S1, S3, and S4). Direct visualization artificial ATP-regenerating system to allow endosomal sorting to occur. After this sorting reaction was completed, endosomes were fixed to coverslips and stained for RET9 or RET51. Values of Pearson's r were then calculated for each colocalized image. By comparing r from samples incubated on ice to those incubated at 37°C, we calculated the ratio r 37C /r ice . As seen in Figure 6B, r values for images stained for RET51 were relatively unchanged under the two conditions (r 37C /r ice = 0.99), whereas RET9 images showed a significant decrease in colocalization after the sorting reaction was completed (r 37C /r ice = 0.62; p < 0.05). This suggests that RET51 remains along with transferrin in the recycling compartment during intracellular sorting, whereas a proportion of RET9 is sorted away from transferrin-containing endosomes. We next investigated whether RET9 and RET51 were present in recycling vesicles of intact HeLa cells. RAB4 and RAB11 are commonly used markers of recycling endosomes. RAB4 is generally associated with recycling endosomes that rapidly return to the cell FIGURE 6: RET51 colocalizes with markers of the endosomalrecycling pathway. (A) Retinoic acid-treated SH-SY5Y cells were collected and incubated with GDNF for 10 min and Alexa 488-labeled transferrin for 5 min at 37°C. Postnuclear supernatants were prepared using a ball homogenizer. Samples were either kept on ice or incubated for 45 min in rat-brain cytosol extract at 37°C before attachment to coverslips and fixation in 3% paraformaldehyde. Coverslips were immunostained for RET9 or RET51 and imaged via a wide-field fluorescence microscope. (B) Pearson's r was calculated using ImageJ for each image, and the ratio of r at 37°C to that at ice temperature (r 37C /r ice ) was determined and plotted. Data are representative of two independent experiments with four fields of view analyzed in each (∼8000 transferrin-positive endosomes). (C) HeLa cells were transiently transfected with RET51 and RAB11-GFP, incubated with 100 ng/ml GDNF where indicated, fixed, and stained for RET51 (red). Pearson's r was calculated for each image and is displayed in the Merge panel. Colocalization was determined using ImageJ as indicated in Materials and Methods. Images are representative of one of three independent experiments. Scale bars, 10 μm (A), 40 μm (C). (Tsui-Pierchala et al., 2002;Scott et al., 2005). It has been hypothesized that this was due to targeting of RET9 and RET51 to individual lipid subdomains of the plasma membrane (Tsui-Pierchala et al., 2002;Scott et al., 2005). Although this may also be a factor, our results suggest that the limited amounts of RET found at the cell surface, the relatively greater concentration of RET51 in the plasma membrane, and the rapid isoform-specific sorting of internalized RET51 to the recycling pathway would minimize the formation of heterodimers and make detection of any heterodimers that do form difficult in vivo. 
Isoform-specific trafficking postinternalization We previously showed that RET is rapidly internalized from the plasma membrane after activation and observed constitutively active, membrane-bound RET proteins in various structures of the endocytic pathway (Richardson et al., 2006. Here, our analyses of these processes in an isoform-specific context revealed two key differences between RET9 and RET51 intracellular trafficking: the rapid internalization of RET51 relative to RET9 (Figure 4) and the ability of RET51 to recycle to the membrane after internalization, whereas RET9 is targeted to the lysosome (Figures 4-7). The mechanisms underlying the recycling of plasma membrane proteins are poorly understood. It was postulated that membrane proteins were continuously recycled to the cell surface in the absence of ubiquitination through a rapid, short-loop recycling pathway (Dunn et al., 1989;Mayor et al., 1993;Katzmann et al., 2002). More recently, a number of targeting motifs have been identified within various non-RTK membrane proteins that direct their recycling back to the plasma membrane (Johnson et al., 2001;Wang et al., 2004;Paasche et al., 2005). This motif-based receptor recycling appears to be a lengthier process, as endocytosed receptors are first sorted to the ERC before returning to the membrane via a long-loop recycling pathway (Johnson et al., 2001). MET and EGFR are the best-understood RTKs with regard to their recycling properties (reviewed in Parachoniak and Park, 2012). MET recycling appears to be directed by the ARF-binding protein GGA3, which is recruited to MET upon activation by its ligand HGF. GGA3 binding occurs in the early endosome and initiates MET trafficking into RAB4-positive recycling vesicles, which are predominantly associated with the rapid, short-loop recycling pathway (Parachoniak et al., 2011). Of interest, knockout of GGA3 inhibited MET recycling and led to a reduction in ERK1/2 signaling, suggesting that recycling plays an important role in sustaining downstream signaling from MET (Parachoniak et al., 2011). EGFR recycling appears to be more complex, in that it has been shown to occur through both rapid, short-loop recycling and the long-loop, ERC-associated pathway (reviewed in Sorkin and Goh, 2009). Extracellular ligand concentration and dissociation of ligand from EGFR within the early endosome appear to be the major determinants of whether EGFR is targeted for recycling or degradation (Sigismund et al., 2008;Sorkin and Goh, 2009;Parachoniak et al., 2011). Our results indicate that RET51 can be found in both RAB4-positive (short loop) and RAB11-positive (long loop) endosomes, suggesting a recycling mechanism similar to that for EGFR. However, the GDNF-induced increase of RET51 colocalization with RAB11 and its accumulation in a perinuclear ERC-like structure suggest that RET51 is predominantly recycled through the long-loop pathway ( Figure 6C; Ullrich et al., 1996). On the basis of our observation that RET51 undergoes GDNF-induced colocalization with RAB11, a marker of motif-based recycling and the ERC, we predict that the RET51 tail contains a yet-unidentified recycling motif that mediates its return to the plasma membrane via this pathway. Hirata et al. (2010) showed that acidification of the Golgi is required for glycosylation of RET9 and EGFR in this organelle. 
Of interest, they found initial glycosylation of RET9 (from 120-kDa nascent protein to the 155-kDa immature form) can proceed in the ER when the V-ATPase responsible for vesicle acidification is blocked, and expression of immature RET9 at the plasma membrane can be detected in V-ATPase-inhibited cells (Hirata et al., 2010). Another factor that has been shown to affect the ability of RET to traffic through the exocytic pathway is the phosphorylation status of threonine 675 in the juxtamembrane region of RET (Li et al., 2012). T675 can be phosphorylated by protein kinase C, and this phosphorylation was found to increase RET levels on the cell surface (Li et al., 2012). Although this residue is common to both isoforms and therefore not responsible for the phenotypes noted here, this finding indicates that posttranslational modifications to the cytoplasmically exposed region of RET can affect its transport from the ER and Golgi to the cell surface. Our tail mutant study (Figure 3, B and C) suggests that the RET9 tail, in conjunction with a portion of the RET9 and RET51 shared sequence immediately Nterminal to the tail region, contains posttranslational modifications such as phosphorylation and/or important interactions with cytosolic proteins that regulate the movement of this isoform through the Golgi. Of interest, RET9 and RET51 heterodimers have not been detected in vivo despite the coexpression of isoforms and their identi-FIGURE 7: Proposed mechanism of RET isoform-specific trafficking. Diagrammatic summary of the data presented here. RET9 is transcribed at higher levels relative to RET51, and both transcripts are delivered to the ER for translation. RET9 and RET51 proceed through the secretory pathway, where RET51 is efficiently delivered to the membrane, whereas a portion of RET9 is retained in the Golgi. On the surface RET9 and RET51 are both able to bind the GDNF/GFRα1 ligand complex (L), autophosphorylate, and activate downstream signaling cascades. After activation RET51 is internalized to endosomes more rapidly than RET9. From endosomes, RET9 is delivered to lysosomes, whereas a portion of RET51 recycles back to the plasma membrane through a RAB11-positive recycling pathway and the rest is targeted to the lysosome. Arrow width is indicative of relative amounts of transcript or protein following each path. RET51 transcript relative to normal tissues, whereas RET9 transcript levels remained constant (Le Hir et al., 2000). This may indicate that increased expression of RET51, but not of RET9, is selected for during tumor development. Our model suggests that a relative increase in RET51 levels would be a more advantageous growth-promoting alteration for the cell due to the increased amounts of cell surface protein and the slower degradation of the RET51 isoform relative to RET9. Conclusions Here we presented a comprehensive study of the subcellular localization and intracellular trafficking of RET receptor isoforms in multiple cell models. Intracellular trafficking is known to have important roles in downstream signaling from membrane receptors. Not only does it provide a mechanism for ablating these signals, but it also provides additional mechanisms for modulating and targeting signals originating from membrane receptors both spatially and temporally. 
Our data indicate that differential subcellular localizations and trafficking properties of RET isoforms result in distinct localizations and degradative properties of RET9 and RET51 that may affect the signaling capacities of these isoforms. Alternative splicing is now recognized as a key step in evolution, allowing a single gene to encode multiple proteins with differing functions. Substitution of 9 or 51 unique amino acids at the C-terminal tail of RET is a striking example of the effect that alternative splicing can have on subcellular trafficking and localization of proteins within the cell. Cell culture and transfections HeLa, HEK293, and SH-SY5Y cells were cultured in DMEM with 10% fetal bovine serum (FBS; Sigma-Aldrich, Oakville, Canada). Retinoic acid, 10 μM, was added to SH-SY5Y cells 24 h before each experiment to induce RET expression. Transient transfections were performed using Lipofectamine 2000 (Invitrogen, Burlington, Canada) with previously described plasmids encoding GFRα1, RET9, RET51, RAB4-GFP, and RAB11-GFP (Gujral et al., 2006;Johns et al., 2009). EPN1-ENTH-GFP was cloned by inserting cDNA encoding EGFP and two copies of the ENTH domain of EPN1 (amino acids 130-243 of isoform A) in-frame into the pcDNA3.1+ mammalian expression vector (Invitrogen). EPN1 cDNA was obtained from Open Biosystems (Lafayette, CO). The 9-in-51 and 51+9 constructs were developed by PCR-based, site-directed mutagenesis of RET51 cDNA. The 9-in-51 construct was created by mutating the first nine unique codons (encoding amino acids 1064-1072) of the RET51 tail to the corresponding sequence from RET9. The 9+51 construct was created by inserting the same RET9 tail coding sequence directly 3′ of the unique RET51 tail. Neonatal rat GI cocultures, consisting of myenteric neurons, glia, and smooth muscle cells, were obtained from 4-d-old Sprague Dawley rats (Charles River, Wilmington, MA) as previously described (Rodrigues et al., 2011). Briefly, the smooth muscle and myentric plexus layers were peeled from the entire length of the small intestine. This tissue was minced, digested with 0.125% trypsin II and 0.5 mg/ml collagenase F (Sigma-Aldrich) in Hank's solution buffered with 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid, and plated in DMEM containing 5% FBS (Sigma-Aldrich), 5 mg/ml soybean trypsin inhibitor (Sigma-Aldrich), and 2 mg/ml ciprofloxacin (Bayer, Wayne, NJ). Cells were allowed to adhere for ∼60 h, the final 12 h without serum, where indicated. GDNF (PeproTech, Rocky Hill, NJ), cycloheximide (BioShop, Burlington, Canada), and brefeldin A (BioShop) were added where indicated at final concentrations of 100 ng/ml, 100 μg/ml, and 5 μg/ml, respectively. Tsui and Pierchala (2010) recently investigated retrograde signal propagation downstream of RET in primary sympathetic and sensory neurons. They showed that GDNF applied directly to axonal tips and not the cell body sustained neuronal survival in dorsal root ganglion (DRG) sensory neurons but not in superior cervical ganglion (SCG) sympathetic neurons, which are seen to rapidly apoptose (Tsui and Pierchala, 2010). Tsui and Pierchala provided two possible explanations for these observations. First, they suggested that the increased levels of RET9 in DRG neurons allow for a stronger survival signal to be conveyed to the cell body, and, second, they observe a more rapid degradation of RET51 in the axons of SCG neurons relative to DRG neurons. 
Our data, along with previous studies that highlight RET51's relatively higher transforming and signaling capacity, indicate that the more rapid loss of RET51 from the axonal tips of SCG neurons may be the key determinant in the different phenotypes of SCG and DRG neurons observed by Tsui and Pierchala. However, because these are vastly different systems (DRG, SCG, enteric neurons, cultured SH-SY5Y cells), one must be careful in drawing parallels between the data. Perhaps more important, together these data clearly indicate that RET can play different roles in different cell lineages and highlight the importance of the use of specific primary tissues in the validation of experimental observations.

Degradation of RET9 and RET51
Previous studies used the translation inhibitor cycloheximide (CHX) to evaluate RET protein stability (Scott et al., 2005; Pierchala et al., 2006; Richardson et al., 2009). Using this method, Pierchala et al. (2006) and Scott et al. (2005) showed RET9 to be more stable than RET51 in primary sympathetic neurons and monoisoformic stable cell lines. We believe that the brefeldin A investigations presented here (Figure 5) complement these previous studies by providing further insight into the translation, posttranslational processing, trafficking, and degradation of RET isoforms. In the cell models investigated here, when Golgi function is impaired by brefeldin A and the transport of immature RET to the plasma membrane is blocked, RET9 is degraded more rapidly than RET51, as it cannot recycle (Figure 5D). We hypothesize that in CHX studies, which assess total intracellular and surface-bound RET protein, the large pool of immature RET9 within the exocytic pathway (Figure 1, B-D) continues to mature in the presence of CHX, increasing the overall perceived stability of total RET9 protein relative to RET51. Therefore, our data suggest that the time required for maturation of the immature RET9 pool, its delivery to the plasma membrane, internalization, and degradation may be responsible for the previously observed greater stability of RET9 relative to RET51 in CHX-based assays.

It is well established that wild-type and oncogenic mutant forms of RET51 proteins have greater transforming and differentiation-inducing ability relative to similar wild-type and mutant RET9 proteins in cell-based assays (colony formation, fibroblast transformation, PC12 cell differentiation; Pasini et al., 1997; Rossel et al., 1997; Iwashita et al., 1999; Le Hir et al., 2000). RET51's ability to recycle and avoid degradation may contribute to this greater transforming ability. Recycling of RET51 increases its residency within the plasma membrane and cytoplasm relative to RET9, prolonging its time in the two regions of the cell where initiation of RET signal transduction is most active (Richardson et al., 2006). Here we showed that recycling of RET51 leads to more rapid and prolonged ERK1/2 activation from this isoform relative to signaling downstream of RET9. In addition, a study of primary pheochromocytoma tumors containing activating RET mutations previously showed increased levels of RET51 transcript relative to normal tissues, whereas RET9 transcript levels remained constant (Le Hir et al., 2000). This may indicate that increased expression of RET51, but not of RET9, is selected for during tumor development. Our model suggests that a relative increase in RET51 levels would be a more advantageous growth-promoting alteration for the cell due to the increased amounts of cell surface protein and the slower degradation of the RET51 isoform relative to RET9.
Triple colocalization of RET, biotin, and EEA1 or LAMP2 was determined using the RG2B Colocalization plug-in (developed by Christopher Philip Mauer, Northwestern University, Chicago, IL) for ImageJ (National Institutes of Health, Bethesda, MD; Abramoff et al., 2004). First, a hybrid image of the biotin and EEA1 or LAMP2 channels was produced in which any pixel that did not meet our colocalization criteria was replaced with a zero-intensity (black) pixel. To be considered colocalized, a pixel had to have an intensity >10 in both channels, and the intensity of the biotin signal had to be within 60% of EEA1/LAMP2, or vice versa. If a pixel met these criteria, it was replaced by a gray-scale pixel with intensity equal to the mean of the two channels. The hybrid image was then colocalized to the corresponding RET channel in the same manner to produce a final image representing the colocalization of all three channels.

Determining endosome distance from membrane
The triple-colocalization images (produced as just described) were remerged with the corresponding biotin channel. The distance in pixels from each endosome in the triple-colocalization image to the nearest portion of plasma membrane (visualized by the biotin signal) was calculated using ImageJ and plotted.

In vitro endosome-sorting assay
This protocol was previously published in detail (Barysch et al., 2010). Briefly, SH-SY5Y cells were treated with retinoic acid overnight, collected into a conical tube, and incubated with GDNF for 10 min, followed by addition of Alexa 488-labeled transferrin for 5 min at 37°C. Cells were placed on ice and washed repeatedly with PBS. Postnuclear supernatants were prepared using a custom-built ball homogenizer. Samples were divided into two lots and either kept on ice or incubated for 45 min in rat brain cytosol extract at 37°C in the presence of an ATP-regenerating system (as described by Barysch et al., 2010). Cells were again cooled on ice before attachment to coverslips by centrifugation and fixation in 3% paraformaldehyde. Coverslips were immunostained for RET9 or RET51 (as described here and in Supplemental Table S1) using isoform-specific primary antibodies and a Cy3-labeled secondary antibody and imaged using a Leica (Wetzlar, Germany) wide-field fluorescence microscope. Images from the RET9 or RET51 and transferrin channels were overlaid, and Pearson's coefficients of colocalization were calculated using the Manders coefficients plug-in for ImageJ (developed by Tony Collins, Wright Cell Imaging Facility, Toronto, Canada; Abramoff et al., 2004). Data are representative of two independent experiments with four fields of view analyzed in each (∼8000 transferrin-containing endosomes).

Statistical analysis
All data are expressed as means ± SE. A two-tailed, unpaired Student's t test was used to determine statistical significance.

Immunoblotting
SDS-PAGE and immunoblotting were performed as previously described using the antibodies and dilutions outlined in Supplemental Table S1.

Quantitative real-time PCR
Quantitative real-time PCR was performed as described previously. Briefly, standard curves were obtained by plotting crossing threshold (Ct) values for serial dilutions of linearized RET cDNA (human or rat). The Ct values for 200 ng of total RNA, isolated from HEK293, SH-SY5Y, or rat GI cocultures using TRIzol (Invitrogen), were obtained and fitted to the standard curves to determine the approximate copy number of RET transcripts in each sample.
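The standard-curve calculation described above amounts to a linear fit of Ct against log10(copy number) over the dilution series, followed by inversion of that fit for each sample. The short sketch below illustrates only this arithmetic; the dilution series, Ct values, and function names are placeholders and are not data or code from this study.

```python
# Minimal sketch of a qPCR standard-curve calculation; placeholder values only.
import numpy as np

def fit_standard_curve(copies, ct):
    """Linear fit of Ct versus log10(copy number) over a serial dilution series."""
    slope, intercept = np.polyfit(np.log10(copies), np.asarray(ct, dtype=float), 1)
    return slope, intercept

def copies_from_ct(ct_sample, slope, intercept):
    """Invert the standard curve to estimate transcript copies from a measured Ct."""
    return 10 ** ((ct_sample - intercept) / slope)

# Hypothetical ten-fold dilutions of linearized RET cDNA.
dilution_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
dilution_ct = np.array([14.1, 17.5, 20.9, 24.4, 27.8])  # roughly -3.4 Ct per decade

slope, intercept = fit_standard_curve(dilution_copies, dilution_ct)
print(copies_from_ct(22.0, slope, intercept))  # estimated copies in one RNA sample
```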
The RET isoform-specific primer pairs CRT14B/KRT14D (RET9) and CRT14B/KRT20A (RET51) have been previously described (Ivanchuk et al., 1997). For cells of rat origin, corresponding rat sequence primers KRT14Drat (GTTACAGACAGTTGGGATGGT) and KRT20Arat (ATCGGCTCTCGTGAGTGGT) were used in conjunction with CRT14B (conserved sequence in human and rat).

Immunofluorescence
Immunofluorescence was performed as described previously (Richardson and Mulligan, 2010), with the following modifications: for GI coculture studies, glass coverslips were coated with 20 ng/ml bovine collagen, and HeLa cells were plated on coverslips coated with 0.2% gelatin. Antibodies and concentrations used are summarized in Supplemental Table S1. Biotinylated proteins were detected with Alexa 594-conjugated streptavidin (Invitrogen) diluted 1:200 in 3% bovine serum albumin (BSA; wt/vol) in phosphate-buffered saline (PBS). Nuclei were visualized with Hoechst 33342 diluted 1:1000 in 3% BSA (wt/vol) in PBS.

Biotinylation
Biotinylation of cells for assessment of membrane vesicle trafficking and recycling of membrane proteins has been extensively described (Richardson and Mulligan, 2010). To quantify total surface RET, cells were serum starved overnight, incubated with or without 100 ng/ml GDNF (PeproTech) for 20 min, washed twice in PBS, cooled to 4°C, and biotinylated as previously described (Smith et al., 2004). Cells were lysed, and biotinylated protein was recovered using streptavidin-coated beads (Invitrogen), followed by separation by SDS-PAGE before immunoblotting. To determine the amount of internalized RET, cells were serum starved and biotinylated as described. After biotinylation, culture medium containing 100 ng/ml GDNF was added to cells, and they were returned to 37°C for various amounts of time. Cells were again washed with cold PBS and cooled to 4°C, and the remaining surface biotin was stripped with 50 mM MeSNa buffer (two washes of 15 min). Cells were lysed, and biotinylated protein was recovered, as described.

Quantitation of relative signal intensity in confocal images
ImageJ (Abramoff et al., 2004) was used to calculate the average signal intensity in circular regions of interest (ROIs) in the cytosol of primary enteric neurons or along 3-pixel-wide linear ROIs on sections of SH-SY5Y cell plasma membrane. Mean pixel intensities were determined for each channel of an ROI, and ratios of those signal intensities were calculated.

Quantitation of RET-containing vesicles
The percentage of endosomes or lysosomes occupied by RET9 or RET51 was determined by counting the number of vesicles in which RET, biotin, and either EEA1 or LAMP2, respectively, colocalized and dividing by the total number of vesicles counted in the EEA1 or LAMP2 channel. A vesicle was defined as an object >10 pixels in size in which each pixel had an intensity >10 (using a standard 8-bit pixel intensity scale of 0-255). These objects were counted using the ImageJ Analyze Particles function (Abramoff et al., 2004).
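For readers who prefer a scripted version of this quantitation, the sketch below reimplements the same vesicle definition (8-bit intensity >10, connected objects >10 pixels) and the pairwise colocalization rule in NumPy/SciPy rather than the ImageJ plug-ins used here; the array names, the reading of the "within 60%" criterion, and the input/output handling are assumptions for illustration, not code from this study.

```python
# Illustrative reimplementation of the vesicle-counting and colocalization
# criteria described above (NumPy/SciPy in place of ImageJ plug-ins).
# Channels are assumed to be registered 8-bit (0-255) single-plane arrays.
import numpy as np
from scipy import ndimage

MIN_INTENSITY = 10   # a pixel must exceed this value to count as signal
MIN_SIZE = 10        # a vesicle is a connected object larger than this many pixels

def count_vesicles(channel):
    """Count connected objects >MIN_SIZE pixels whose pixels all exceed MIN_INTENSITY."""
    mask = channel > MIN_INTENSITY
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(sizes > MIN_SIZE))

def colocalize(ch_a, ch_b, ratio=0.6):
    """Hybrid image keeping pixels bright in both channels and within 60% of each other.

    The "within 60%" rule is read here as |a - b| <= 0.6 * max(a, b); kept pixels
    take the mean intensity of the two channels, all other pixels are set to zero.
    """
    a, b = ch_a.astype(float), ch_b.astype(float)
    keep = (a > MIN_INTENSITY) & (b > MIN_INTENSITY) & (np.abs(a - b) <= ratio * np.maximum(a, b))
    return np.where(keep, (a + b) / 2.0, 0.0)

# Triple colocalization: biotin vs. EEA1 (or LAMP2), then the hybrid vs. the RET channel.
# ret, biotin, eea1 = ...  # load three registered channel images here
# triple = colocalize(colocalize(biotin, eea1), ret)
# fraction_ret_positive = count_vesicles(triple) / max(count_vesicles(eea1), 1)
```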
Comparative Study of Valency-Based Topological Descriptor for Hexagon Star Network

A class of graph invariants, referred to today as topological indices, is progressively acknowledged by scientific experts and others as an integral tool in the description of structural phenomena. The structure of an interconnection network can be represented by a graph: vertices represent the processor nodes and edges represent the links between the processor nodes. Graph invariants play a vital role in graph theory and distinguish the structural properties of graphs and networks. A topological descriptor is a numerical quantity associated with a structure that characterizes the topology of the structure and is invariant under structure automorphism. Graph theory has many uses in the basic sciences; the first notable use of a topological descriptor in chemistry was by Wiener in the study of paraffin boiling points. In this paper we study topological descriptors of a newly designed hexagon star network. More precisely, we compute the variation of the Randić index R′, the fourth Zagreb M4, the fifth Zagreb M5, the geometric-arithmetic GA, atom-bond connectivity ABC, harmonic H, and symmetric division degree SDD indices, the first, second, and third redefined Zagreb indices, the augmented Zagreb index AZI, the Albertson index A, irregularity measures, the reformulated Zagreb index, and the forgotten topological descriptor for the hexagon star network. In the analysis of quantitative structure-property relationships (QSPRs) and quantitative structure-activity relationships (QSARs), graph invariants are important tools to approximate and predict the properties of biological and chemical compounds. We also give numerical and graphical comparisons of our different results.

Introduction
Cheminformatics is a new field of modern science that connects chemistry, mathematics, and other fields of science. Quantitative structure-activity relationships (QSARs) and quantitative structure-property relationships (QSPRs) are the principal parts of cheminformatics and are useful for studying the physico-chemical properties of networks. A topological descriptor (TD) is a numerical quantity associated with a structure that characterizes the topology of the structure and is invariant under structure automorphism. Graph theory has many uses in the basic sciences; the first notable use of a TD in chemistry was by Wiener in the study of paraffin boiling points [1]. Since then, various TDs have been introduced to explain physico-chemical properties. Topological descriptors are commonly partitioned into three types: degree-, distance-, and spectrum-based. The structures of networks can be modeled mathematically by graphs, in which a vertex represents a processor node and an edge describes a link between processors. The topology of the graph of a network determines the way in which any two vertices are linked by an edge, and specific properties of a network can be obtained from its topology without much effort. The diameter is defined as the maximum distance between any two nodes in the network. The number of links connected to a node determines the degree of that node; if this number is the same for all nodes in the network, the network is called regular. TDs can be computed easily by using the ideas of atomic topology (AT), a discipline based on graph theory.
Actually, AT has proven to be an excellent tool for the fast and accurate estimation of many physicochemical as well as biological properties [2,3]. To compute topological indices, the basics of AT are used: a chemical compound is converted into a graph in which atoms and bonds are represented by vertices and edges, respectively. The basic definitions and notations are taken from the book [4]. The number of vertices adjacent to a vertex $e$ is the degree of $e$, denoted $d_e$.

Degree-Based Indices
In this section, we define some degree-based topological indices $T(H)$. The quantity $R_a(H)=\sum_{uv\in E(H)}(d_u d_v)^a$ represents the general Randić, second Zagreb, and second modified Zagreb indices for $a\neq 0\in\mathbb{R}$, $a=1$, and $a=-1$, respectively. The quantity $\chi_a(H)=\sum_{uv\in E(H)}(d_u+d_v)^a$ represents the general sum-connectivity, sum-connectivity, first Zagreb, and hyper-Zagreb indices for $a\neq 0\in\mathbb{R}$, $a=-\tfrac{1}{2}$, $a=1$, and $a=2$, respectively.

Hexagon Star Network Sheet
Interconnection networks are important in computer networking and are used to exchange information between computers and processors. In the last few years, many researchers have designed new interconnection networks. In a parallel computer system, an interconnection network is used to increase performance. In graph theory, a network is represented as a graph: the processors are represented by vertices and the connections between units by edges. From the topology of a network, we can determine certain properties. The degree of a node is defined as the total number of links connected to that node, and the network is said to be regular if each node in the network has the same degree. In this paper, we define a new interconnection network, the hexagon star network. This network is a composition of triangles around a hexagon, as shown in Fig. 1.

Figure 1: The hexagon star network sheet for p = 2, q = 2.

Main Results
In this section, we give results that can be used to obtain any degree-based topological descriptor, and we obtain exact results of degree-based TDs for the hexagon star network sheet H. Vetrík [22] introduced a new method to calculate topological indices, which was also used in [23]; we follow the same technique in this paper. We first present a formula that can be used to obtain any degree-based TD: for a descriptor of the form $T(H)=\sum_{uv\in E(H)}f(d_u,d_v)$, we have $T(H)=(8p+4q)\,f(2,4)+(12pq-2p-4q)\,f(4,4)$.

Proof. The graph H contains $6pq+5p+q$ vertices and $12pq+6p$ edges. Each vertex of H has degree 2 or 4, so the vertices of H can be partitioned according to their degrees; the set $V_i$ contains the vertices of degree $i$. Since the degree sum equals $2(12pq+6p)$, we get $|V_2|=4p+2q$ and $|V_4|=6pq+p-q$. We also partition the edges of H into sets $E_{i,j}$ based on the degrees of their end vertices. The number of edges incident to one vertex of degree 2 and one vertex of degree 4 is $8p+4q$, so $|E_{2,4}|=8p+4q$. The remaining edges are those incident to two vertices of degree 4, i.e., $|E_{4,4}|=12pq+6p-(8p+4q)=12pq-2p-4q$. Hence, summing $f(d_u,d_v)$ over both edge classes and simplifying gives the stated formula.

Now we obtain the well-known degree-based TDs of the hexagon star network in the following theorem: the general Randić index of H is $R_a(H)=(8p+4q)\,8^a+(12pq-2p-4q)\,16^a$. For $a=-\tfrac{1}{2}$, the Randić index simplifies to $R(H)=3pq+(2\sqrt{2}-\tfrac{1}{2})p+(\sqrt{2}-1)q$, and for $a=-1$, the second modified Zagreb index is $\tfrac{3}{4}pq+\tfrac{7}{8}p+\tfrac{1}{4}q$. We gave a graphical comparison of Theorem 4.2 in Fig. 2 and numerical values in Tab. 1. In the next theorems, we determined the general sum-connectivity index, first Zagreb index, and hyper-Zagreb index of the hexagon star network H; we gave a graphical comparison of Theorem 4.5 in Fig. 5. For the second redefined Zagreb index of H, we gave a graphical comparison of Theorem 4.6 in Fig. 6 and numerical values in Tab. 5.
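Because the edge partition reduces every descriptor of the form $\sum_{uv\in E(H)}f(d_u,d_v)$ to a two-term sum, the results above are easy to check numerically. The following short Python sketch is an illustration, not code from the paper: it builds the partition $|E_{2,4}|=8p+4q$, $|E_{4,4}|=12pq-2p-4q$ for given p and q and evaluates a few of the descriptors discussed here.

```python
# Minimal sketch: evaluate degree-based descriptors of the hexagon star
# network H(p, q) from its edge partition.
from math import sqrt

def edge_partition(p, q):
    """Return {(du, dv): count} for H(p, q): 8p+4q edges of type (2,4), rest (4,4)."""
    e24 = 8 * p + 4 * q
    e44 = 12 * p * q + 6 * p - e24          # total number of edges is 12pq + 6p
    return {(2, 4): e24, (4, 4): e44}

def descriptor(partition, f):
    """Generic degree-based descriptor: sum of f(du, dv) over all edges."""
    return sum(count * f(du, dv) for (du, dv), count in partition.items())

part = edge_partition(p=2, q=2)
randic   = descriptor(part, lambda du, dv: 1 / sqrt(du * dv))
zagreb_1 = descriptor(part, lambda du, dv: du + dv)
abc      = descriptor(part, lambda du, dv: sqrt((du + dv - 2) / (du * dv)))
ga       = descriptor(part, lambda du, dv: 2 * sqrt(du * dv) / (du + dv))
print(randic, zagreb_1, abc, ga)
```

For p = q = 2 the partition is 24 edges of type (2,4) and 36 of type (4,4), and the Randić value printed by the sketch (about 17.49) matches the closed form $3pq+(2\sqrt{2}-\tfrac{1}{2})p+(\sqrt{2}-1)q$ given above.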
with several challenging schemes. In the analysis of quantitative structure-property relationships (QSPRs) and quantitative structure-activity relationships (QSARs), graph invariants are important tools to approximate and predict the properties of biological and chemical compounds. In this paper, we studied valency-based topological descriptors for the hexagon star network.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.